Test Report: KVM_Linux_crio 18649

                    
7e28b54b3772a78cf87e91422424e940246c9ed2:2024-04-16:34054

Failed tests (30/319)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 159.46
53 TestAddons/StoppedEnableDisable 154.29
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 14.81
172 TestMultiControlPlane/serial/StopSecondaryNode 142.04
174 TestMultiControlPlane/serial/RestartSecondaryNode 48.42
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 409.69
179 TestMultiControlPlane/serial/StopCluster 142.26
239 TestMultiNode/serial/RestartKeepsNodes 307.03
241 TestMultiNode/serial/StopMultiNode 141.7
248 TestPreload 277.97
256 TestKubernetesUpgrade 467.21
268 TestStartStop/group/old-k8s-version/serial/FirstStart 300.88
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.56
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 107.66
298 TestStartStop/group/no-preload/serial/Stop 139.15
299 TestStartStop/group/embed-certs/serial/Stop 139.09
302 TestStartStop/group/old-k8s-version/serial/SecondStart 513.84
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.43
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.22
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.26
315 TestPause/serial/SecondStartNoReconfiguration 53.22
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.16
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 331.68
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 413.75
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 356.4
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.4
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 393.79

TestAddons/parallel/Ingress (159.46s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-320546 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-320546 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-320546 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ddf62631-814a-41e0-96ed-ec74b1056618] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ddf62631-814a-41e0-96ed-ec74b1056618] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004814172s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-320546 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.459848049s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-320546 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.101
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-320546 addons disable ingress-dns --alsologtostderr -v=1: (1.768138429s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-320546 addons disable ingress --alsologtostderr -v=1: (7.95593807s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-320546 -n addons-320546
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-320546 logs -n 25: (1.392033253s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-348353                                                                     | download-only-348353 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-080115                                                                     | download-only-080115 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-794654                                                                     | download-only-794654 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-348353                                                                     | download-only-348353 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-249934 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | binary-mirror-249934                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:41139                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-249934                                                                     | binary-mirror-249934 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| addons  | disable dashboard -p                                                                        | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | addons-320546                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | addons-320546                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-320546 --wait=true                                                                | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:22 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| ssh     | addons-320546 ssh cat                                                                       | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | /opt/local-path-provisioner/pvc-af6913b5-62da-4d3c-913d-34caa313684f_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-320546 addons disable                                                                | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-320546 addons                                                                        | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-320546 ip                                                                            | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	| addons  | addons-320546 addons disable                                                                | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | addons-320546                                                                               |                      |         |                |                     |                     |
	| addons  | addons-320546 addons disable                                                                | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | addons-320546                                                                               |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | -p addons-320546                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | -p addons-320546                                                                            |                      |         |                |                     |                     |
	| ssh     | addons-320546 ssh curl -s                                                                   | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| addons  | addons-320546 addons                                                                        | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-320546 addons                                                                        | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:22 UTC | 16 Apr 24 16:22 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-320546 ip                                                                            | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:25 UTC | 16 Apr 24 16:25 UTC |
	| addons  | addons-320546 addons disable                                                                | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:25 UTC | 16 Apr 24 16:25 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-320546 addons disable                                                                | addons-320546        | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:25 UTC | 16 Apr 24 16:25 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:45.323414   11693 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:45.323530   11693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:45.323539   11693 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:45.323543   11693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:45.323755   11693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:19:45.324325   11693 out.go:298] Setting JSON to false
	I0416 16:19:45.325201   11693 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":137,"bootTime":1713284248,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:45.325259   11693 start.go:139] virtualization: kvm guest
	I0416 16:19:45.327329   11693 out.go:177] * [addons-320546] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:19:45.328575   11693 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:19:45.328582   11693 notify.go:220] Checking for updates...
	I0416 16:19:45.330876   11693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:45.332098   11693 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:19:45.333345   11693 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:19:45.334580   11693 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:19:45.335710   11693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:19:45.336912   11693 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:19:45.366803   11693 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 16:19:45.368139   11693 start.go:297] selected driver: kvm2
	I0416 16:19:45.368154   11693 start.go:901] validating driver "kvm2" against <nil>
	I0416 16:19:45.368171   11693 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:19:45.368857   11693 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:45.368947   11693 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:19:45.382988   11693 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:19:45.383058   11693 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:19:45.383266   11693 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:19:45.383332   11693 cni.go:84] Creating CNI manager for ""
	I0416 16:19:45.383345   11693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 16:19:45.383353   11693 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 16:19:45.383402   11693 start.go:340] cluster config:
	{Name:addons-320546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-320546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:19:45.383495   11693 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:45.385158   11693 out.go:177] * Starting "addons-320546" primary control-plane node in "addons-320546" cluster
	I0416 16:19:45.386335   11693 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:19:45.386370   11693 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 16:19:45.386377   11693 cache.go:56] Caching tarball of preloaded images
	I0416 16:19:45.386469   11693 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:19:45.386479   11693 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:19:45.386751   11693 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/config.json ...
	I0416 16:19:45.386768   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/config.json: {Name:mk30d096c935b17b57772a7a3c960dc7bcfe84ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:19:45.386882   11693 start.go:360] acquireMachinesLock for addons-320546: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:19:45.386923   11693 start.go:364] duration metric: took 28.892µs to acquireMachinesLock for "addons-320546"
	I0416 16:19:45.386940   11693 start.go:93] Provisioning new machine with config: &{Name:addons-320546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:addons-320546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:19:45.386994   11693 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 16:19:45.388552   11693 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0416 16:19:45.388666   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:19:45.388698   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:19:45.402274   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0416 16:19:45.402683   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:19:45.403191   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:19:45.403222   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:19:45.403547   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:19:45.403717   11693 main.go:141] libmachine: (addons-320546) Calling .GetMachineName
	I0416 16:19:45.403822   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:19:45.403949   11693 start.go:159] libmachine.API.Create for "addons-320546" (driver="kvm2")
	I0416 16:19:45.403978   11693 client.go:168] LocalClient.Create starting
	I0416 16:19:45.404026   11693 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 16:19:45.485009   11693 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 16:19:45.620662   11693 main.go:141] libmachine: Running pre-create checks...
	I0416 16:19:45.620686   11693 main.go:141] libmachine: (addons-320546) Calling .PreCreateCheck
	I0416 16:19:45.621181   11693 main.go:141] libmachine: (addons-320546) Calling .GetConfigRaw
	I0416 16:19:45.621578   11693 main.go:141] libmachine: Creating machine...
	I0416 16:19:45.621593   11693 main.go:141] libmachine: (addons-320546) Calling .Create
	I0416 16:19:45.621727   11693 main.go:141] libmachine: (addons-320546) Creating KVM machine...
	I0416 16:19:45.622946   11693 main.go:141] libmachine: (addons-320546) DBG | found existing default KVM network
	I0416 16:19:45.623615   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:45.623485   11715 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0416 16:19:45.623649   11693 main.go:141] libmachine: (addons-320546) DBG | created network xml: 
	I0416 16:19:45.623660   11693 main.go:141] libmachine: (addons-320546) DBG | <network>
	I0416 16:19:45.623666   11693 main.go:141] libmachine: (addons-320546) DBG |   <name>mk-addons-320546</name>
	I0416 16:19:45.623672   11693 main.go:141] libmachine: (addons-320546) DBG |   <dns enable='no'/>
	I0416 16:19:45.623676   11693 main.go:141] libmachine: (addons-320546) DBG |   
	I0416 16:19:45.623683   11693 main.go:141] libmachine: (addons-320546) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0416 16:19:45.623694   11693 main.go:141] libmachine: (addons-320546) DBG |     <dhcp>
	I0416 16:19:45.623706   11693 main.go:141] libmachine: (addons-320546) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0416 16:19:45.623724   11693 main.go:141] libmachine: (addons-320546) DBG |     </dhcp>
	I0416 16:19:45.623735   11693 main.go:141] libmachine: (addons-320546) DBG |   </ip>
	I0416 16:19:45.623741   11693 main.go:141] libmachine: (addons-320546) DBG |   
	I0416 16:19:45.623768   11693 main.go:141] libmachine: (addons-320546) DBG | </network>
	I0416 16:19:45.623787   11693 main.go:141] libmachine: (addons-320546) DBG | 
	I0416 16:19:45.628754   11693 main.go:141] libmachine: (addons-320546) DBG | trying to create private KVM network mk-addons-320546 192.168.39.0/24...
	I0416 16:19:45.691270   11693 main.go:141] libmachine: (addons-320546) DBG | private KVM network mk-addons-320546 192.168.39.0/24 created
	I0416 16:19:45.691297   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:45.691219   11715 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:19:45.691313   11693 main.go:141] libmachine: (addons-320546) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546 ...
	I0416 16:19:45.691334   11693 main.go:141] libmachine: (addons-320546) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:19:45.691425   11693 main.go:141] libmachine: (addons-320546) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:19:45.908783   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:45.908604   11715 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa...
	I0416 16:19:46.157968   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:46.157832   11715 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/addons-320546.rawdisk...
	I0416 16:19:46.158006   11693 main.go:141] libmachine: (addons-320546) DBG | Writing magic tar header
	I0416 16:19:46.158020   11693 main.go:141] libmachine: (addons-320546) DBG | Writing SSH key tar header
	I0416 16:19:46.158032   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:46.157964   11715 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546 ...
	I0416 16:19:46.158063   11693 main.go:141] libmachine: (addons-320546) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546
	I0416 16:19:46.158116   11693 main.go:141] libmachine: (addons-320546) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 16:19:46.158141   11693 main.go:141] libmachine: (addons-320546) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546 (perms=drwx------)
	I0416 16:19:46.158165   11693 main.go:141] libmachine: (addons-320546) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:19:46.158175   11693 main.go:141] libmachine: (addons-320546) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 16:19:46.158181   11693 main.go:141] libmachine: (addons-320546) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:19:46.158188   11693 main.go:141] libmachine: (addons-320546) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:19:46.158195   11693 main.go:141] libmachine: (addons-320546) DBG | Checking permissions on dir: /home
	I0416 16:19:46.158205   11693 main.go:141] libmachine: (addons-320546) DBG | Skipping /home - not owner
	I0416 16:19:46.158226   11693 main.go:141] libmachine: (addons-320546) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:19:46.158237   11693 main.go:141] libmachine: (addons-320546) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 16:19:46.158246   11693 main.go:141] libmachine: (addons-320546) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 16:19:46.158254   11693 main.go:141] libmachine: (addons-320546) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:19:46.158265   11693 main.go:141] libmachine: (addons-320546) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 16:19:46.158284   11693 main.go:141] libmachine: (addons-320546) Creating domain...
	I0416 16:19:46.159274   11693 main.go:141] libmachine: (addons-320546) define libvirt domain using xml: 
	I0416 16:19:46.159301   11693 main.go:141] libmachine: (addons-320546) <domain type='kvm'>
	I0416 16:19:46.159312   11693 main.go:141] libmachine: (addons-320546)   <name>addons-320546</name>
	I0416 16:19:46.159319   11693 main.go:141] libmachine: (addons-320546)   <memory unit='MiB'>4000</memory>
	I0416 16:19:46.159328   11693 main.go:141] libmachine: (addons-320546)   <vcpu>2</vcpu>
	I0416 16:19:46.159333   11693 main.go:141] libmachine: (addons-320546)   <features>
	I0416 16:19:46.159338   11693 main.go:141] libmachine: (addons-320546)     <acpi/>
	I0416 16:19:46.159346   11693 main.go:141] libmachine: (addons-320546)     <apic/>
	I0416 16:19:46.159351   11693 main.go:141] libmachine: (addons-320546)     <pae/>
	I0416 16:19:46.159355   11693 main.go:141] libmachine: (addons-320546)     
	I0416 16:19:46.159361   11693 main.go:141] libmachine: (addons-320546)   </features>
	I0416 16:19:46.159366   11693 main.go:141] libmachine: (addons-320546)   <cpu mode='host-passthrough'>
	I0416 16:19:46.159372   11693 main.go:141] libmachine: (addons-320546)   
	I0416 16:19:46.159380   11693 main.go:141] libmachine: (addons-320546)   </cpu>
	I0416 16:19:46.159406   11693 main.go:141] libmachine: (addons-320546)   <os>
	I0416 16:19:46.159422   11693 main.go:141] libmachine: (addons-320546)     <type>hvm</type>
	I0416 16:19:46.159429   11693 main.go:141] libmachine: (addons-320546)     <boot dev='cdrom'/>
	I0416 16:19:46.159436   11693 main.go:141] libmachine: (addons-320546)     <boot dev='hd'/>
	I0416 16:19:46.159441   11693 main.go:141] libmachine: (addons-320546)     <bootmenu enable='no'/>
	I0416 16:19:46.159448   11693 main.go:141] libmachine: (addons-320546)   </os>
	I0416 16:19:46.159453   11693 main.go:141] libmachine: (addons-320546)   <devices>
	I0416 16:19:46.159464   11693 main.go:141] libmachine: (addons-320546)     <disk type='file' device='cdrom'>
	I0416 16:19:46.159475   11693 main.go:141] libmachine: (addons-320546)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/boot2docker.iso'/>
	I0416 16:19:46.159483   11693 main.go:141] libmachine: (addons-320546)       <target dev='hdc' bus='scsi'/>
	I0416 16:19:46.159518   11693 main.go:141] libmachine: (addons-320546)       <readonly/>
	I0416 16:19:46.159544   11693 main.go:141] libmachine: (addons-320546)     </disk>
	I0416 16:19:46.159562   11693 main.go:141] libmachine: (addons-320546)     <disk type='file' device='disk'>
	I0416 16:19:46.159580   11693 main.go:141] libmachine: (addons-320546)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:19:46.159598   11693 main.go:141] libmachine: (addons-320546)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/addons-320546.rawdisk'/>
	I0416 16:19:46.159609   11693 main.go:141] libmachine: (addons-320546)       <target dev='hda' bus='virtio'/>
	I0416 16:19:46.159617   11693 main.go:141] libmachine: (addons-320546)     </disk>
	I0416 16:19:46.159623   11693 main.go:141] libmachine: (addons-320546)     <interface type='network'>
	I0416 16:19:46.159632   11693 main.go:141] libmachine: (addons-320546)       <source network='mk-addons-320546'/>
	I0416 16:19:46.159640   11693 main.go:141] libmachine: (addons-320546)       <model type='virtio'/>
	I0416 16:19:46.159652   11693 main.go:141] libmachine: (addons-320546)     </interface>
	I0416 16:19:46.159665   11693 main.go:141] libmachine: (addons-320546)     <interface type='network'>
	I0416 16:19:46.159679   11693 main.go:141] libmachine: (addons-320546)       <source network='default'/>
	I0416 16:19:46.159689   11693 main.go:141] libmachine: (addons-320546)       <model type='virtio'/>
	I0416 16:19:46.159701   11693 main.go:141] libmachine: (addons-320546)     </interface>
	I0416 16:19:46.159711   11693 main.go:141] libmachine: (addons-320546)     <serial type='pty'>
	I0416 16:19:46.159732   11693 main.go:141] libmachine: (addons-320546)       <target port='0'/>
	I0416 16:19:46.159747   11693 main.go:141] libmachine: (addons-320546)     </serial>
	I0416 16:19:46.159766   11693 main.go:141] libmachine: (addons-320546)     <console type='pty'>
	I0416 16:19:46.159781   11693 main.go:141] libmachine: (addons-320546)       <target type='serial' port='0'/>
	I0416 16:19:46.159793   11693 main.go:141] libmachine: (addons-320546)     </console>
	I0416 16:19:46.159804   11693 main.go:141] libmachine: (addons-320546)     <rng model='virtio'>
	I0416 16:19:46.159824   11693 main.go:141] libmachine: (addons-320546)       <backend model='random'>/dev/random</backend>
	I0416 16:19:46.159844   11693 main.go:141] libmachine: (addons-320546)     </rng>
	I0416 16:19:46.159860   11693 main.go:141] libmachine: (addons-320546)     
	I0416 16:19:46.159871   11693 main.go:141] libmachine: (addons-320546)     
	I0416 16:19:46.159898   11693 main.go:141] libmachine: (addons-320546)   </devices>
	I0416 16:19:46.159916   11693 main.go:141] libmachine: (addons-320546) </domain>
	I0416 16:19:46.159931   11693 main.go:141] libmachine: (addons-320546) 
	I0416 16:19:46.166382   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:e4:49:1b in network default
	I0416 16:19:46.166997   11693 main.go:141] libmachine: (addons-320546) Ensuring networks are active...
	I0416 16:19:46.167012   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:46.167585   11693 main.go:141] libmachine: (addons-320546) Ensuring network default is active
	I0416 16:19:46.167934   11693 main.go:141] libmachine: (addons-320546) Ensuring network mk-addons-320546 is active
	I0416 16:19:46.169238   11693 main.go:141] libmachine: (addons-320546) Getting domain xml...
	I0416 16:19:46.169849   11693 main.go:141] libmachine: (addons-320546) Creating domain...
	I0416 16:19:47.536668   11693 main.go:141] libmachine: (addons-320546) Waiting to get IP...
	I0416 16:19:47.537419   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:47.537741   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:47.537770   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:47.537705   11715 retry.go:31] will retry after 214.646572ms: waiting for machine to come up
	I0416 16:19:47.754213   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:47.754568   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:47.754620   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:47.754537   11715 retry.go:31] will retry after 324.169325ms: waiting for machine to come up
	I0416 16:19:48.080151   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:48.080578   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:48.080613   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:48.080532   11715 retry.go:31] will retry after 310.364246ms: waiting for machine to come up
	I0416 16:19:48.391877   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:48.392330   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:48.392355   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:48.392298   11715 retry.go:31] will retry after 519.214015ms: waiting for machine to come up
	I0416 16:19:48.912716   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:48.913186   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:48.913225   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:48.913124   11715 retry.go:31] will retry after 551.050473ms: waiting for machine to come up
	I0416 16:19:49.465741   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:49.466179   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:49.466208   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:49.466110   11715 retry.go:31] will retry after 892.492925ms: waiting for machine to come up
	I0416 16:19:50.360128   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:50.360762   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:50.360787   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:50.360707   11715 retry.go:31] will retry after 1.16738417s: waiting for machine to come up
	I0416 16:19:51.529447   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:51.529872   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:51.529893   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:51.529814   11715 retry.go:31] will retry after 1.365918051s: waiting for machine to come up
	I0416 16:19:52.897389   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:52.897818   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:52.897849   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:52.897770   11715 retry.go:31] will retry after 1.233721721s: waiting for machine to come up
	I0416 16:19:54.133589   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:54.133971   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:54.134000   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:54.133909   11715 retry.go:31] will retry after 1.952704628s: waiting for machine to come up
	I0416 16:19:56.087931   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:56.088350   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:56.088379   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:56.088301   11715 retry.go:31] will retry after 2.543403259s: waiting for machine to come up
	I0416 16:19:58.633774   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:19:58.634199   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:19:58.634229   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:19:58.634151   11715 retry.go:31] will retry after 2.67443032s: waiting for machine to come up
	I0416 16:20:01.310419   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:01.310776   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:20:01.310807   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:20:01.310724   11715 retry.go:31] will retry after 2.917554793s: waiting for machine to come up
	I0416 16:20:04.229481   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:04.229908   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find current IP address of domain addons-320546 in network mk-addons-320546
	I0416 16:20:04.229936   11693 main.go:141] libmachine: (addons-320546) DBG | I0416 16:20:04.229864   11715 retry.go:31] will retry after 4.321641306s: waiting for machine to come up
	I0416 16:20:08.554938   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:08.555365   11693 main.go:141] libmachine: (addons-320546) Found IP for machine: 192.168.39.101
	I0416 16:20:08.555396   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has current primary IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:08.555405   11693 main.go:141] libmachine: (addons-320546) Reserving static IP address...
	I0416 16:20:08.555647   11693 main.go:141] libmachine: (addons-320546) DBG | unable to find host DHCP lease matching {name: "addons-320546", mac: "52:54:00:c8:f0:9d", ip: "192.168.39.101"} in network mk-addons-320546
	I0416 16:20:08.625269   11693 main.go:141] libmachine: (addons-320546) DBG | Getting to WaitForSSH function...
	I0416 16:20:08.625289   11693 main.go:141] libmachine: (addons-320546) Reserved static IP address: 192.168.39.101
	I0416 16:20:08.625301   11693 main.go:141] libmachine: (addons-320546) Waiting for SSH to be available...
	I0416 16:20:08.627696   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:08.627948   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:08.627975   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:08.628126   11693 main.go:141] libmachine: (addons-320546) DBG | Using SSH client type: external
	I0416 16:20:08.628157   11693 main.go:141] libmachine: (addons-320546) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa (-rw-------)
	I0416 16:20:08.628204   11693 main.go:141] libmachine: (addons-320546) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:20:08.628246   11693 main.go:141] libmachine: (addons-320546) DBG | About to run SSH command:
	I0416 16:20:08.628276   11693 main.go:141] libmachine: (addons-320546) DBG | exit 0
	I0416 16:20:08.765258   11693 main.go:141] libmachine: (addons-320546) DBG | SSH cmd err, output: <nil>: 
	I0416 16:20:08.765516   11693 main.go:141] libmachine: (addons-320546) KVM machine creation complete!
	I0416 16:20:08.765788   11693 main.go:141] libmachine: (addons-320546) Calling .GetConfigRaw
	I0416 16:20:08.766313   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:08.766507   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:08.766659   11693 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:20:08.766674   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:08.767786   11693 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:20:08.767800   11693 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:20:08.767806   11693 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:20:08.767812   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:08.770058   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:08.770464   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:08.770488   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:08.770664   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:08.770829   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:08.771007   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:08.771151   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:08.771362   11693 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:08.771538   11693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0416 16:20:08.771550   11693 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:20:08.884160   11693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:20:08.884187   11693 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:20:08.884197   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:08.886863   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:08.887250   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:08.887278   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:08.887387   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:08.887582   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:08.887750   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:08.887881   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:08.888057   11693 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:08.888260   11693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0416 16:20:08.888273   11693 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:20:09.002201   11693 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:20:09.002275   11693 main.go:141] libmachine: found compatible host: buildroot
	I0416 16:20:09.002282   11693 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:20:09.002289   11693 main.go:141] libmachine: (addons-320546) Calling .GetMachineName
	I0416 16:20:09.002531   11693 buildroot.go:166] provisioning hostname "addons-320546"
	I0416 16:20:09.002554   11693 main.go:141] libmachine: (addons-320546) Calling .GetMachineName
	I0416 16:20:09.002834   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:09.005295   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.005594   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.005617   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.005790   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:09.005956   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.006159   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.006280   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:09.006404   11693 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:09.006568   11693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0416 16:20:09.006580   11693 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-320546 && echo "addons-320546" | sudo tee /etc/hostname
	I0416 16:20:09.137198   11693 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-320546
	
	I0416 16:20:09.137241   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:09.139676   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.140082   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.140107   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.140307   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:09.140517   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.140701   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.140892   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:09.141074   11693 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:09.141268   11693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0416 16:20:09.141286   11693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-320546' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-320546/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-320546' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:20:09.263552   11693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:20:09.263587   11693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:20:09.263636   11693 buildroot.go:174] setting up certificates
	I0416 16:20:09.263656   11693 provision.go:84] configureAuth start
	I0416 16:20:09.263697   11693 main.go:141] libmachine: (addons-320546) Calling .GetMachineName
	I0416 16:20:09.263980   11693 main.go:141] libmachine: (addons-320546) Calling .GetIP
	I0416 16:20:09.266393   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.266725   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.266764   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.266868   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:09.269915   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.270313   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.270336   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.270446   11693 provision.go:143] copyHostCerts
	I0416 16:20:09.270509   11693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:20:09.270621   11693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:20:09.270687   11693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:20:09.270779   11693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.addons-320546 san=[127.0.0.1 192.168.39.101 addons-320546 localhost minikube]
	I0416 16:20:09.381391   11693 provision.go:177] copyRemoteCerts
	I0416 16:20:09.381445   11693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:20:09.381465   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:09.383902   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.384150   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.384177   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.384326   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:09.384490   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.384686   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:09.384880   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:09.472779   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 16:20:09.499153   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:20:09.524361   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:20:09.550366   11693 provision.go:87] duration metric: took 286.677192ms to configureAuth
	I0416 16:20:09.550391   11693 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:20:09.550577   11693 config.go:182] Loaded profile config "addons-320546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:20:09.550701   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:09.553435   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.553825   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.553852   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.553995   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:09.554196   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.554383   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.554544   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:09.554730   11693 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:09.554977   11693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0416 16:20:09.555002   11693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:20:09.840959   11693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 16:20:09.841005   11693 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:20:09.841016   11693 main.go:141] libmachine: (addons-320546) Calling .GetURL
	I0416 16:20:09.842330   11693 main.go:141] libmachine: (addons-320546) DBG | Using libvirt version 6000000
	I0416 16:20:09.844167   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.844468   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.844499   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.844618   11693 main.go:141] libmachine: Docker is up and running!
	I0416 16:20:09.844631   11693 main.go:141] libmachine: Reticulating splines...
	I0416 16:20:09.844638   11693 client.go:171] duration metric: took 24.440653056s to LocalClient.Create
	I0416 16:20:09.844660   11693 start.go:167] duration metric: took 24.440713414s to libmachine.API.Create "addons-320546"
	I0416 16:20:09.844676   11693 start.go:293] postStartSetup for "addons-320546" (driver="kvm2")
	I0416 16:20:09.844690   11693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:20:09.844708   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:09.844935   11693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:20:09.844959   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:09.846706   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.846989   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.847021   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.847237   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:09.847461   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.847619   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:09.847790   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:09.936869   11693 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:20:09.941629   11693 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:20:09.941654   11693 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:20:09.941728   11693 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:20:09.941760   11693 start.go:296] duration metric: took 97.073888ms for postStartSetup
	I0416 16:20:09.941799   11693 main.go:141] libmachine: (addons-320546) Calling .GetConfigRaw
	I0416 16:20:09.942352   11693 main.go:141] libmachine: (addons-320546) Calling .GetIP
	I0416 16:20:09.944488   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.944768   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.944789   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.945026   11693 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/config.json ...
	I0416 16:20:09.945195   11693 start.go:128] duration metric: took 24.558190448s to createHost
	I0416 16:20:09.945215   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:09.947155   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.947463   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:09.947499   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:09.947615   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:09.947769   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.947933   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:09.948052   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:09.948199   11693 main.go:141] libmachine: Using SSH client type: native
	I0416 16:20:09.948337   11693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0416 16:20:09.948348   11693 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 16:20:10.062057   11693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713284410.043176337
	
	I0416 16:20:10.062082   11693 fix.go:216] guest clock: 1713284410.043176337
	I0416 16:20:10.062089   11693 fix.go:229] Guest: 2024-04-16 16:20:10.043176337 +0000 UTC Remote: 2024-04-16 16:20:09.945205368 +0000 UTC m=+24.670594212 (delta=97.970969ms)
	I0416 16:20:10.062106   11693 fix.go:200] guest clock delta is within tolerance: 97.970969ms
	I0416 16:20:10.062112   11693 start.go:83] releasing machines lock for "addons-320546", held for 24.67517799s
	I0416 16:20:10.062134   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:10.062420   11693 main.go:141] libmachine: (addons-320546) Calling .GetIP
	I0416 16:20:10.064967   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:10.065338   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:10.065373   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:10.065548   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:10.066007   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:10.066168   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:10.066259   11693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:20:10.066300   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:10.066387   11693 ssh_runner.go:195] Run: cat /version.json
	I0416 16:20:10.066404   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:10.068694   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:10.068981   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:10.069007   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:10.069138   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:10.069143   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:10.069352   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:10.069502   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:10.069513   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:10.069536   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:10.069646   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:10.069803   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:10.069967   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:10.070236   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:10.070382   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:10.174938   11693 ssh_runner.go:195] Run: systemctl --version
	I0416 16:20:10.181751   11693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:20:10.347380   11693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:20:10.354914   11693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:20:10.354961   11693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:20:10.372100   11693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:20:10.372127   11693 start.go:494] detecting cgroup driver to use...
	I0416 16:20:10.372191   11693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:20:10.388597   11693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:20:10.403156   11693 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:20:10.403211   11693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:20:10.417392   11693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:20:10.431674   11693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:20:10.552157   11693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:20:10.690420   11693 docker.go:233] disabling docker service ...
	I0416 16:20:10.690499   11693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:20:10.713872   11693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:20:10.729255   11693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:20:10.867856   11693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:20:10.981365   11693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:20:10.997754   11693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:20:11.018066   11693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:20:11.018116   11693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:20:11.030059   11693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:20:11.030122   11693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:20:11.041933   11693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:20:11.054036   11693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:20:11.066085   11693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:20:11.078615   11693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:20:11.090903   11693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:20:11.110339   11693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:20:11.122582   11693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:20:11.133556   11693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:20:11.133605   11693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:20:11.148889   11693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:20:11.159870   11693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:11.287963   11693 ssh_runner.go:195] Run: sudo systemctl restart crio
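	The Run: lines in this stretch are minikube's CRI-O preparation: point crictl at the CRI-O socket, set the pause image and cgroup driver, load br_netfilter, enable IP forwarding, then restart the runtime. A minimal consolidated sketch of those same steps, assuming a Buildroot guest laid out like the one above (paths and sed expressions are copied from the logged commands; this is an illustration for reproducing the setup by hand, not minikube's provisioning code):

	# Point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pause image and cgroup driver, as applied to /etc/crio/crio.conf.d/02-crio.conf above
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# Netfilter module and IP forwarding needed by the bridge CNI, then restart CRI-O
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio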
	I0416 16:20:11.432671   11693 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:20:11.432784   11693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:20:11.438086   11693 start.go:562] Will wait 60s for crictl version
	I0416 16:20:11.438148   11693 ssh_runner.go:195] Run: which crictl
	I0416 16:20:11.442580   11693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:20:11.483339   11693 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:20:11.483457   11693 ssh_runner.go:195] Run: crio --version
	I0416 16:20:11.513172   11693 ssh_runner.go:195] Run: crio --version
	I0416 16:20:11.545530   11693 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:20:11.546921   11693 main.go:141] libmachine: (addons-320546) Calling .GetIP
	I0416 16:20:11.549504   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:11.549884   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:11.549913   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:11.550109   11693 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:20:11.554830   11693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:20:11.568830   11693 kubeadm.go:877] updating cluster {Name:addons-320546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-320546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:20:11.568942   11693 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:20:11.568979   11693 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:20:11.604643   11693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 16:20:11.604713   11693 ssh_runner.go:195] Run: which lz4
	I0416 16:20:11.609420   11693 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:20:11.614386   11693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:20:11.614409   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 16:20:13.201389   11693 crio.go:462] duration metric: took 1.591991029s to copy over tarball
	I0416 16:20:13.201472   11693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:20:15.944781   11693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.743280826s)
	I0416 16:20:15.944812   11693 crio.go:469] duration metric: took 2.74339099s to extract the tarball
	I0416 16:20:15.944821   11693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:20:15.984287   11693 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:20:16.028188   11693 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 16:20:16.028216   11693 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:20:16.028225   11693 kubeadm.go:928] updating node { 192.168.39.101 8443 v1.29.3 crio true true} ...
	I0416 16:20:16.028356   11693 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-320546 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-320546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:20:16.028421   11693 ssh_runner.go:195] Run: crio config
	I0416 16:20:16.078548   11693 cni.go:84] Creating CNI manager for ""
	I0416 16:20:16.078569   11693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 16:20:16.078580   11693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:20:16.078604   11693 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.101 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-320546 NodeName:addons-320546 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:20:16.078750   11693 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-320546"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
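	The kubeadm configuration rendered above is written to the guest as /var/tmp/minikube/kubeadm.yaml.new a few lines further down and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init consumes it. If you want to compare this dump against what actually landed on the node, a spot-check along these lines should work once that copy has happened (profile name taken from this run; this is a debugging aid, not part of the test):

	# Dump the rendered kubeadm config from inside the guest
	minikube -p addons-320546 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"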
	I0416 16:20:16.078819   11693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:20:16.090234   11693 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:20:16.090300   11693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 16:20:16.100706   11693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0416 16:20:16.119545   11693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:20:16.137522   11693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0416 16:20:16.155707   11693 ssh_runner.go:195] Run: grep 192.168.39.101	control-plane.minikube.internal$ /etc/hosts
	I0416 16:20:16.160448   11693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:20:16.174278   11693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:16.287381   11693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:20:16.304442   11693 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546 for IP: 192.168.39.101
	I0416 16:20:16.304462   11693 certs.go:194] generating shared ca certs ...
	I0416 16:20:16.304478   11693 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.304632   11693 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:20:16.449760   11693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt ...
	I0416 16:20:16.449786   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt: {Name:mkc9f08914a151295c9fd67cf167c3dd065cccd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.449956   11693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key ...
	I0416 16:20:16.449972   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key: {Name:mk03082a3bca352310c4320658321478eb4d67fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.450061   11693 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:20:16.697149   11693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt ...
	I0416 16:20:16.697177   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt: {Name:mke4395651cd0a231df8a84bfdab5f660275a2a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.697354   11693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key ...
	I0416 16:20:16.697371   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key: {Name:mkb986e594054ee767c91ea9679b59e3c9a9dc56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.697468   11693 certs.go:256] generating profile certs ...
	I0416 16:20:16.697535   11693 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.key
	I0416 16:20:16.697552   11693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt with IP's: []
	I0416 16:20:16.893043   11693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt ...
	I0416 16:20:16.893071   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: {Name:mk6a39e1db7fb90d571b526037775e1cc9f4b54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.893263   11693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.key ...
	I0416 16:20:16.893278   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.key: {Name:mkbb49a14645c49549cd6807bdb76c217742d8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.893367   11693 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.key.8e3e8f93
	I0416 16:20:16.893401   11693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.crt.8e3e8f93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.101]
	I0416 16:20:16.969462   11693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.crt.8e3e8f93 ...
	I0416 16:20:16.969492   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.crt.8e3e8f93: {Name:mk5d9949b18720fbff67e950e26c9fcc64883f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.969648   11693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.key.8e3e8f93 ...
	I0416 16:20:16.969667   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.key.8e3e8f93: {Name:mka31a7724d3c97a1fd7c705ed775c3fdb603f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:16.969759   11693 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.crt.8e3e8f93 -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.crt
	I0416 16:20:16.969865   11693 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.key.8e3e8f93 -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.key
	I0416 16:20:16.969931   11693 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/proxy-client.key
	I0416 16:20:16.969957   11693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/proxy-client.crt with IP's: []
	I0416 16:20:17.096065   11693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/proxy-client.crt ...
	I0416 16:20:17.096093   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/proxy-client.crt: {Name:mk3d2031d5ddbcbd90f4079101a7ad207e8a9c4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:17.096264   11693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/proxy-client.key ...
	I0416 16:20:17.096280   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/proxy-client.key: {Name:mk4b31683239a2a053e96767a52e81e7d6978f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:17.096494   11693 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:20:17.096534   11693 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:20:17.096568   11693 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:20:17.096598   11693 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:20:17.097239   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:20:17.130470   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:20:17.167102   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:20:17.194476   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:20:17.221262   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0416 16:20:17.247831   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:20:17.275875   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:20:17.303786   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:20:17.329653   11693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:20:17.355197   11693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:20:17.373751   11693 ssh_runner.go:195] Run: openssl version
	I0416 16:20:17.381885   11693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:20:17.394059   11693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:17.399569   11693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:17.399625   11693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:20:17.407944   11693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
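	At this point the shared CA material has been copied into /var/lib/minikube/certs and linked into the guest's certificate store. An optional way to sanity-check one of those certificates from outside the VM is plain openssl over minikube ssh (the guest has openssl, as the version and hash commands above show; the path comes from the scp and ln commands above, and this check is not something the test itself performs):

	# Show subject and validity window of the minikube CA linked into /etc/ssl/certs
	minikube -p addons-320546 ssh "openssl x509 -in /usr/share/ca-certificates/minikubeCA.pem -noout -subject -dates"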
	I0416 16:20:17.422185   11693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:20:17.426595   11693 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:20:17.426640   11693 kubeadm.go:391] StartCluster: {Name:addons-320546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-320546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:20:17.426723   11693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 16:20:17.426781   11693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:20:17.472401   11693 cri.go:89] found id: ""
	I0416 16:20:17.472473   11693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:20:17.483580   11693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:20:17.494235   11693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:20:17.504781   11693 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:20:17.504815   11693 kubeadm.go:156] found existing configuration files:
	
	I0416 16:20:17.504876   11693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:20:17.514841   11693 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:20:17.514889   11693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:20:17.525514   11693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:20:17.535481   11693 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:20:17.535531   11693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:20:17.545981   11693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:20:17.556049   11693 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:20:17.556091   11693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:20:17.566410   11693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:20:17.577395   11693 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:20:17.577452   11693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
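
The sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not reference it is removed before kubeadm init runs. A minimal sketch of that check-then-remove pattern, written as a standalone Go program run locally rather than through ssh_runner as minikube does (the endpoint and file names come from the log; everything else is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            // grep exits non-zero when the endpoint is missing or the file does not
            // exist, which matches the "Process exited with status 2" entries above.
            if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
                fmt.Printf("%s may not be in %s - removing\n", endpoint, conf)
                _ = exec.Command("sudo", "rm", "-f", conf).Run() // ignore errors; file may already be gone
            }
        }
    }
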
	I0416 16:20:17.588402   11693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:20:17.778692   11693 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:20:28.762986   11693 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:20:28.763059   11693 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:20:28.763168   11693 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:20:28.763319   11693 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:20:28.763455   11693 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:20:28.763549   11693 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:20:28.765329   11693 out.go:204]   - Generating certificates and keys ...
	I0416 16:20:28.765388   11693 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:20:28.765451   11693 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:20:28.765538   11693 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:20:28.765598   11693 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:20:28.765677   11693 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:20:28.765786   11693 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:20:28.765887   11693 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:20:28.766035   11693 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-320546 localhost] and IPs [192.168.39.101 127.0.0.1 ::1]
	I0416 16:20:28.766117   11693 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:20:28.766294   11693 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-320546 localhost] and IPs [192.168.39.101 127.0.0.1 ::1]
	I0416 16:20:28.766384   11693 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:20:28.766485   11693 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:20:28.766547   11693 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:20:28.766641   11693 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:20:28.766724   11693 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:20:28.766822   11693 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:20:28.766890   11693 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:20:28.766970   11693 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:20:28.767048   11693 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:20:28.767166   11693 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:20:28.767242   11693 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:20:28.768877   11693 out.go:204]   - Booting up control plane ...
	I0416 16:20:28.768983   11693 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:20:28.769092   11693 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:20:28.769152   11693 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:20:28.769254   11693 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:20:28.769333   11693 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:20:28.769403   11693 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:20:28.769571   11693 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:20:28.769713   11693 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003545 seconds
	I0416 16:20:28.769875   11693 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:20:28.770034   11693 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:20:28.770094   11693 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:20:28.770248   11693 kubeadm.go:309] [mark-control-plane] Marking the node addons-320546 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:20:28.770302   11693 kubeadm.go:309] [bootstrap-token] Using token: 31ppyx.vmxv6qotf559cj21
	I0416 16:20:28.771817   11693 out.go:204]   - Configuring RBAC rules ...
	I0416 16:20:28.771902   11693 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:20:28.772002   11693 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:20:28.772197   11693 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:20:28.772312   11693 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:20:28.772427   11693 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:20:28.772514   11693 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:20:28.772616   11693 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:20:28.772655   11693 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:20:28.772702   11693 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:20:28.772708   11693 kubeadm.go:309] 
	I0416 16:20:28.772757   11693 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:20:28.772764   11693 kubeadm.go:309] 
	I0416 16:20:28.772847   11693 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:20:28.772858   11693 kubeadm.go:309] 
	I0416 16:20:28.772890   11693 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:20:28.772982   11693 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:20:28.773042   11693 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:20:28.773051   11693 kubeadm.go:309] 
	I0416 16:20:28.773108   11693 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:20:28.773114   11693 kubeadm.go:309] 
	I0416 16:20:28.773152   11693 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:20:28.773159   11693 kubeadm.go:309] 
	I0416 16:20:28.773241   11693 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:20:28.773359   11693 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:20:28.773460   11693 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:20:28.773471   11693 kubeadm.go:309] 
	I0416 16:20:28.773584   11693 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:20:28.773714   11693 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:20:28.773730   11693 kubeadm.go:309] 
	I0416 16:20:28.773849   11693 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 31ppyx.vmxv6qotf559cj21 \
	I0416 16:20:28.773962   11693 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 \
	I0416 16:20:28.773984   11693 kubeadm.go:309] 	--control-plane 
	I0416 16:20:28.773988   11693 kubeadm.go:309] 
	I0416 16:20:28.774056   11693 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:20:28.774075   11693 kubeadm.go:309] 
	I0416 16:20:28.774195   11693 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 31ppyx.vmxv6qotf559cj21 \
	I0416 16:20:28.774345   11693 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 
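
The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the control plane's CA. A short Go sketch that recomputes such a hash from the certificateDir shown earlier in the log (the calculation is standard kubeadm behaviour; the program itself is illustrative, not minikube code):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // ca.crt lives under the certificateDir the log reports: /var/lib/minikube/certs
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
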
	I0416 16:20:28.774399   11693 cni.go:84] Creating CNI manager for ""
	I0416 16:20:28.774413   11693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 16:20:28.777049   11693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 16:20:28.778308   11693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 16:20:28.815563   11693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
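
Configuring the bridge CNI amounts to dropping a conflist into /etc/cni/net.d before the kubelet schedules any pods. The 496-byte file itself is not reproduced in the log; the sketch below writes a representative bridge-plus-portmap configuration (the JSON field values are assumptions for illustration, not the exact contents minikube installs as 1-k8s.conflist):

    package main

    import "os"

    // Representative bridge CNI conflist; subnet and plugin options are illustrative.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
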
	I0416 16:20:28.901295   11693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:20:28.901392   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:28.901392   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-320546 minikube.k8s.io/updated_at=2024_04_16T16_20_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=addons-320546 minikube.k8s.io/primary=true
	I0416 16:20:28.953401   11693 ops.go:34] apiserver oom_adj: -16
	I0416 16:20:29.088684   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:29.588818   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:30.089627   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:30.589407   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:31.089334   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:31.589600   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:32.089396   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:32.589348   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:33.089403   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:33.589549   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:34.089552   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:34.589287   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:35.088955   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:35.589388   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:36.089214   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:36.588754   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:37.089420   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:37.588929   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:38.089375   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:38.589718   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:39.089290   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:39.589466   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:40.088820   11693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:20:40.207902   11693 kubeadm.go:1107] duration metric: took 11.306572626s to wait for elevateKubeSystemPrivileges
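
The burst of "kubectl get sa default" runs above is a plain poll: minikube retries roughly every 500ms until the default service account exists, then logs the elapsed time (11.3s here) as the elevateKubeSystemPrivileges wait. A hypothetical sketch of the same wait loop (binary path and kubeconfig are taken from the log; the interval and timeout are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const (
            kubectl    = "/var/lib/minikube/binaries/v1.29.3/kubectl"
            kubeconfig = "/var/lib/minikube/kubeconfig"
        )
        start := time.Now()
        deadline := start.Add(5 * time.Minute) // assumed overall timeout
        for time.Now().Before(deadline) {
            // The default service account appears only after the controller-manager
            // has created it, so its existence is a cheap cluster-readiness signal.
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                fmt.Printf("default service account ready after %s\n", time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
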
	W0416 16:20:40.207940   11693 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:20:40.207949   11693 kubeadm.go:393] duration metric: took 22.781312739s to StartCluster
	I0416 16:20:40.207984   11693 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:40.208106   11693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:20:40.208494   11693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:20:40.208694   11693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:20:40.208728   11693 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:20:40.210798   11693 out.go:177] * Verifying Kubernetes components...
	I0416 16:20:40.208809   11693 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0416 16:20:40.209012   11693 config.go:182] Loaded profile config "addons-320546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:20:40.210900   11693 addons.go:69] Setting yakd=true in profile "addons-320546"
	I0416 16:20:40.212467   11693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:20:40.212474   11693 addons.go:234] Setting addon yakd=true in "addons-320546"
	I0416 16:20:40.212502   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.210898   11693 addons.go:69] Setting ingress=true in profile "addons-320546"
	I0416 16:20:40.212556   11693 addons.go:234] Setting addon ingress=true in "addons-320546"
	I0416 16:20:40.210907   11693 addons.go:69] Setting ingress-dns=true in profile "addons-320546"
	I0416 16:20:40.212603   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.212659   11693 addons.go:234] Setting addon ingress-dns=true in "addons-320546"
	I0416 16:20:40.210908   11693 addons.go:69] Setting gcp-auth=true in profile "addons-320546"
	I0416 16:20:40.212731   11693 mustload.go:65] Loading cluster: addons-320546
	I0416 16:20:40.212750   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.210913   11693 addons.go:69] Setting inspektor-gadget=true in profile "addons-320546"
	I0416 16:20:40.212828   11693 addons.go:234] Setting addon inspektor-gadget=true in "addons-320546"
	I0416 16:20:40.210914   11693 addons.go:69] Setting helm-tiller=true in profile "addons-320546"
	I0416 16:20:40.210918   11693 addons.go:69] Setting metrics-server=true in profile "addons-320546"
	I0416 16:20:40.210920   11693 addons.go:69] Setting cloud-spanner=true in profile "addons-320546"
	I0416 16:20:40.210927   11693 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-320546"
	I0416 16:20:40.210935   11693 addons.go:69] Setting storage-provisioner=true in profile "addons-320546"
	I0416 16:20:40.210935   11693 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-320546"
	I0416 16:20:40.210944   11693 addons.go:69] Setting registry=true in profile "addons-320546"
	I0416 16:20:40.210944   11693 addons.go:69] Setting volumesnapshots=true in profile "addons-320546"
	I0416 16:20:40.210931   11693 addons.go:69] Setting default-storageclass=true in profile "addons-320546"
	I0416 16:20:40.210953   11693 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-320546"
	I0416 16:20:40.212960   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.212965   11693 config.go:182] Loaded profile config "addons-320546": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:20:40.212982   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.212990   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.212998   11693 addons.go:234] Setting addon helm-tiller=true in "addons-320546"
	I0416 16:20:40.213028   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.213038   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213059   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213085   11693 addons.go:234] Setting addon storage-provisioner=true in "addons-320546"
	I0416 16:20:40.213127   11693 addons.go:234] Setting addon metrics-server=true in "addons-320546"
	I0416 16:20:40.213188   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.213236   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.212969   11693 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-320546"
	I0416 16:20:40.213289   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213307   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.213346   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213383   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213388   11693 addons.go:234] Setting addon cloud-spanner=true in "addons-320546"
	I0416 16:20:40.213401   11693 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-320546"
	I0416 16:20:40.213416   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213432   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213435   11693 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-320546"
	I0416 16:20:40.213456   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.213458   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.213470   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213486   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213545   11693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-320546"
	I0416 16:20:40.213581   11693 addons.go:234] Setting addon registry=true in "addons-320546"
	I0416 16:20:40.213608   11693 addons.go:234] Setting addon volumesnapshots=true in "addons-320546"
	I0416 16:20:40.213785   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213806   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213800   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213803   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213854   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213862   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.213902   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.213914   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213934   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213942   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.213986   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.214029   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.214294   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.214368   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.214373   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.214397   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.214406   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.214417   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.214589   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.234215   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I0416 16:20:40.234219   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0416 16:20:40.234839   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.234891   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.235295   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40997
	I0416 16:20:40.235600   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.235621   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.235662   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.235676   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.235723   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.236013   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.236168   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.236188   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.236246   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.236600   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.236992   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.237053   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.237122   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.237161   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.238261   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
	I0416 16:20:40.241449   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0416 16:20:40.241511   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.241551   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.241851   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.241884   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.255108   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37583
	I0416 16:20:40.255193   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.255318   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0416 16:20:40.255922   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.256054   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.256256   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.256268   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.256325   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.256429   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.256468   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.256615   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.256782   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.256795   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.256889   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.257352   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.257413   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.257458   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.257469   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.257836   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.257869   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.258098   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.259210   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.259246   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.260714   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.261069   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.261104   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.261768   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.261818   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.262129   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0416 16:20:40.262679   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.263234   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.263252   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.263672   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.264191   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.264222   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.280215   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44637
	I0416 16:20:40.282374   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0416 16:20:40.282866   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.283262   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I0416 16:20:40.283468   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.283486   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.283884   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.284093   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.285148   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.285324   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I0416 16:20:40.285493   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I0416 16:20:40.285666   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.285767   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.285934   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.285946   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.286238   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.286255   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.286314   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.286779   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.286838   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.287829   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.287892   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.289376   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.289456   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I0416 16:20:40.292109   11693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:20:40.289883   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.292020   11693 addons.go:234] Setting addon default-storageclass=true in "addons-320546"
	I0416 16:20:40.293044   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.293745   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.293811   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0416 16:20:40.295327   11693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0416 16:20:40.293998   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.294267   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.294393   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.294687   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.294735   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.299140   11693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:20:40.297787   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.297805   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.297854   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.298023   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0416 16:20:40.298136   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.298352   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.300236   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0416 16:20:40.300540   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.300642   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.300667   11693 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0416 16:20:40.300682   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0416 16:20:40.300700   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.300865   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.300971   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.301023   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.302245   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.302335   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.302410   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.302575   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.302618   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.302936   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.302954   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.303074   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.303090   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.303220   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.303240   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0416 16:20:40.303255   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.303409   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.303687   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.303709   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.303761   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.303913   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.304148   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.304203   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.304453   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.304931   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.304950   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.305169   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
	I0416 16:20:40.305327   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.305667   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.305810   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.305836   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.305964   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.306102   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.306227   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.306416   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.306778   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.306850   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.309097   11693 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0416 16:20:40.307482   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.308379   11693 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-320546"
	I0416 16:20:40.308461   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.310427   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.310468   11693 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0416 16:20:40.310483   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0416 16:20:40.310501   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.310534   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:40.312086   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0416 16:20:40.310920   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.310923   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.315065   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0416 16:20:40.313892   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.313975   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.314100   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.314519   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.314634   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46697
	I0416 16:20:40.316643   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.316687   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.318455   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0416 16:20:40.317042   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.318081   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.318984   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.321074   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0416 16:20:40.320004   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.320507   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.322515   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.323863   11693 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0416 16:20:40.323883   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.325058   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0416 16:20:40.326990   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.327373   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
	I0416 16:20:40.327938   11693 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0416 16:20:40.327956   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0416 16:20:40.327973   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.327390   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0416 16:20:40.327793   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0416 16:20:40.328233   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.328456   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.329149   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.330057   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0416 16:20:40.331576   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0416 16:20:40.332754   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.332773   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.332755   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
	I0416 16:20:40.330659   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.332894   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.334551   11693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:20:40.333669   11693 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0416 16:20:40.335895   11693 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:20:40.335908   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:20:40.335915   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0416 16:20:40.335925   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.335934   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.333729   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46667
	I0416 16:20:40.333737   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.333758   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.336076   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.333798   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.333887   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.336156   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.334018   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.334765   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0416 16:20:40.335210   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I0416 16:20:40.336382   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.336514   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.336561   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.336570   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.336575   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.336633   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.336744   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.336833   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.336919   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.336926   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.337069   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.337518   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.337888   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.337904   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.337968   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.338234   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.338251   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.338699   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.338852   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.338916   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39561
	I0416 16:20:40.339049   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.339063   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.339325   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.339397   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.340016   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.340033   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.340103   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.342510   11693 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0416 16:20:40.340986   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.341342   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.341367   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.341380   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.342148   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.342148   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.342434   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.342721   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.342724   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.343108   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.344033   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0416 16:20:40.344095   11693 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 16:20:40.344104   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 16:20:40.344125   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.344172   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.344193   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.344242   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.344271   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.344536   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.346737   11693 out.go:177]   - Using image docker.io/registry:2.8.3
	I0416 16:20:40.344890   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.344910   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.344924   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.344954   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.345323   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.347218   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.347772   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.348207   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.348373   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.348410   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.348619   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.349439   11693 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0416 16:20:40.349459   11693 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0416 16:20:40.349478   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.349679   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.352256   11693 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0416 16:20:40.352267   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0416 16:20:40.352278   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.350887   11693 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0416 16:20:40.350909   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.350960   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.351098   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.351123   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.351160   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.351399   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.352045   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36067
	I0416 16:20:40.353820   11693 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0416 16:20:40.354643   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.355387   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.355621   11693 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0416 16:20:40.355632   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0416 16:20:40.355648   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.356104   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0416 16:20:40.356122   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.356716   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.358288   11693 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0416 16:20:40.356823   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.356862   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.356890   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.357114   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.359445   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.359616   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.359619   11693 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0416 16:20:40.359638   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0416 16:20:40.359652   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.359704   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.360212   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.360279   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.360291   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.360306   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.360312   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.360293   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.360512   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.360563   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.360677   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.360733   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.360766   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.361168   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.361430   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.361490   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.361509   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.361708   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.361885   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.362337   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:40.362380   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:40.362811   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.362836   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.363015   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.365026   11693 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0416 16:20:40.363436   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.363589   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.366532   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.366542   11693 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0416 16:20:40.366559   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0416 16:20:40.366576   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.367197   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.367459   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.367699   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0416 16:20:40.367710   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.368593   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.369134   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.369156   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.369522   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.370241   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.370246   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.370655   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.370677   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.370885   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.371081   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.371262   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.371419   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.371892   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.373571   11693 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0416 16:20:40.374872   11693 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0416 16:20:40.374891   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0416 16:20:40.374907   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.374589   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43841
	I0416 16:20:40.375325   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.376099   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.376118   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.376504   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.376743   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.377555   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.377991   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.378019   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.378312   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.378379   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.378521   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.378646   11693 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:20:40.378660   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:20:40.378674   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.378748   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.378863   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:40.381028   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.381320   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.381345   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.381490   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.381669   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.381801   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.381924   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	W0416 16:20:40.385144   11693 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39188->192.168.39.101:22: read: connection reset by peer
	I0416 16:20:40.385176   11693 retry.go:31] will retry after 215.949197ms: ssh: handshake failed: read tcp 192.168.39.1:39188->192.168.39.101:22: read: connection reset by peer
	I0416 16:20:40.385806   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0416 16:20:40.386170   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:40.386577   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:40.386598   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:40.386948   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:40.387118   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:40.388388   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:40.390199   11693 out.go:177]   - Using image docker.io/busybox:stable
	I0416 16:20:40.392144   11693 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0416 16:20:40.393790   11693 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0416 16:20:40.393812   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0416 16:20:40.393841   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:40.396970   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.397378   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:40.397418   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:40.397662   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:40.397888   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:40.398110   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:40.398313   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	W0416 16:20:40.399391   11693 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39200->192.168.39.101:22: read: connection reset by peer
	I0416 16:20:40.399414   11693 retry.go:31] will retry after 135.120692ms: ssh: handshake failed: read tcp 192.168.39.1:39200->192.168.39.101:22: read: connection reset by peer
	I0416 16:20:40.728410   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0416 16:20:40.732325   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0416 16:20:40.740734   11693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:20:40.740794   11693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:20:40.821243   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0416 16:20:40.832496   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0416 16:20:40.877242   11693 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0416 16:20:40.877274   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0416 16:20:40.993264   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0416 16:20:41.049959   11693 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0416 16:20:41.049988   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0416 16:20:41.067134   11693 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0416 16:20:41.067158   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0416 16:20:41.075639   11693 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 16:20:41.075666   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0416 16:20:41.084723   11693 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0416 16:20:41.084743   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0416 16:20:41.104300   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:20:41.110302   11693 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0416 16:20:41.110322   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0416 16:20:41.200807   11693 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0416 16:20:41.200828   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0416 16:20:41.235349   11693 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0416 16:20:41.235371   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0416 16:20:41.259441   11693 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0416 16:20:41.259457   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0416 16:20:41.282234   11693 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0416 16:20:41.282255   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0416 16:20:41.343128   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:20:41.369023   11693 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0416 16:20:41.369042   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0416 16:20:41.439091   11693 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 16:20:41.439122   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 16:20:41.449207   11693 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0416 16:20:41.449223   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0416 16:20:41.460586   11693 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0416 16:20:41.460600   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0416 16:20:41.513556   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0416 16:20:41.515903   11693 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0416 16:20:41.515922   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0416 16:20:41.568402   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0416 16:20:41.702592   11693 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0416 16:20:41.702620   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0416 16:20:41.744344   11693 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 16:20:41.744370   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 16:20:41.770484   11693 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0416 16:20:41.770505   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0416 16:20:41.919857   11693 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0416 16:20:41.919878   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0416 16:20:41.939244   11693 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0416 16:20:41.939267   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0416 16:20:42.017904   11693 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0416 16:20:42.017930   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0416 16:20:42.102922   11693 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0416 16:20:42.102946   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0416 16:20:42.212266   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 16:20:42.318375   11693 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0416 16:20:42.318400   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0416 16:20:42.369332   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0416 16:20:42.433016   11693 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:20:42.433042   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0416 16:20:42.448769   11693 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0416 16:20:42.448791   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0416 16:20:42.799524   11693 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0416 16:20:42.799550   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0416 16:20:42.966028   11693 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0416 16:20:42.966050   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0416 16:20:43.047236   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:20:43.162492   11693 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0416 16:20:43.162517   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0416 16:20:43.239104   11693 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0416 16:20:43.239124   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0416 16:20:43.366445   11693 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0416 16:20:43.366464   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0416 16:20:43.545555   11693 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0416 16:20:43.545587   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0416 16:20:43.623634   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0416 16:20:43.758079   11693 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0416 16:20:43.758104   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0416 16:20:44.114193   11693 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0416 16:20:44.114220   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0416 16:20:44.417028   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0416 16:20:46.743256   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.014812552s)
	I0416 16:20:46.743307   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:46.743310   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.010952236s)
	I0416 16:20:46.743322   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:46.743330   11693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.002511561s)
	I0416 16:20:46.743342   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:46.743352   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:46.743352   11693 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
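For context, the line above reports the outcome of the sed pipeline started at 16:20:40.740794: it rewrites the coredns ConfigMap so the Corefile gains a hosts block mapping host.minikube.internal to the host-side IP 192.168.39.1 (plus a log directive). A hypothetical way to confirm the record landed, not part of the original run (context name and IP taken from the log above):

    # Illustrative check only.
    kubectl --context addons-320546 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
    # Expected, per the sed expression in the log:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }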
	I0416 16:20:46.743397   11693 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.002632978s)
	I0416 16:20:46.743446   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.922179892s)
	I0416 16:20:46.743467   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:46.743481   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:46.743582   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:46.743624   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:46.743632   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:46.743632   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:46.743641   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:46.743647   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:46.743656   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:46.743664   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:46.743671   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:46.743678   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:46.743766   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:46.743784   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:46.743794   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:46.743829   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:46.743962   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:46.744016   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:46.744041   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:46.744062   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:46.744075   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:46.744083   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:46.745645   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:46.745683   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:46.745706   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:46.757804   11693 node_ready.go:35] waiting up to 6m0s for node "addons-320546" to be "Ready" ...
	I0416 16:20:46.908986   11693 node_ready.go:49] node "addons-320546" has status "Ready":"True"
	I0416 16:20:46.909006   11693 node_ready.go:38] duration metric: took 151.147659ms for node "addons-320546" to be "Ready" ...
	I0416 16:20:46.909015   11693 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:20:46.974797   11693 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-69q4z" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:46.993549   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:46.993568   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:46.994011   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:46.994012   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:46.994042   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:47.065267   11693 pod_ready.go:92] pod "coredns-76f75df574-69q4z" in "kube-system" namespace has status "Ready":"True"
	I0416 16:20:47.065294   11693 pod_ready.go:81] duration metric: took 90.467818ms for pod "coredns-76f75df574-69q4z" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.065306   11693 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-lvqrq" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.163589   11693 pod_ready.go:92] pod "coredns-76f75df574-lvqrq" in "kube-system" namespace has status "Ready":"True"
	I0416 16:20:47.163609   11693 pod_ready.go:81] duration metric: took 98.295881ms for pod "coredns-76f75df574-lvqrq" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.163622   11693 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-320546" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.169661   11693 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0416 16:20:47.169692   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:47.172469   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:47.172854   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:47.172879   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:47.173059   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:47.173261   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:47.173434   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:47.173582   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:47.274008   11693 pod_ready.go:92] pod "etcd-addons-320546" in "kube-system" namespace has status "Ready":"True"
	I0416 16:20:47.274033   11693 pod_ready.go:81] duration metric: took 110.403026ms for pod "etcd-addons-320546" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.274046   11693 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-320546" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.283592   11693 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-320546" context rescaled to 1 replicas
	I0416 16:20:47.353704   11693 pod_ready.go:92] pod "kube-apiserver-addons-320546" in "kube-system" namespace has status "Ready":"True"
	I0416 16:20:47.353724   11693 pod_ready.go:81] duration metric: took 79.671157ms for pod "kube-apiserver-addons-320546" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.353734   11693 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-320546" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.431001   11693 pod_ready.go:92] pod "kube-controller-manager-addons-320546" in "kube-system" namespace has status "Ready":"True"
	I0416 16:20:47.431033   11693 pod_ready.go:81] duration metric: took 77.291798ms for pod "kube-controller-manager-addons-320546" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.431050   11693 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vkm8w" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.590252   11693 pod_ready.go:92] pod "kube-proxy-vkm8w" in "kube-system" namespace has status "Ready":"True"
	I0416 16:20:47.590274   11693 pod_ready.go:81] duration metric: took 159.215627ms for pod "kube-proxy-vkm8w" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.590286   11693 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-320546" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.950401   11693 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0416 16:20:47.963581   11693 pod_ready.go:92] pod "kube-scheduler-addons-320546" in "kube-system" namespace has status "Ready":"True"
	I0416 16:20:47.963605   11693 pod_ready.go:81] duration metric: took 373.310763ms for pod "kube-scheduler-addons-320546" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:47.963615   11693 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-h7wqn" in "kube-system" namespace to be "Ready" ...
	I0416 16:20:48.299368   11693 addons.go:234] Setting addon gcp-auth=true in "addons-320546"
	I0416 16:20:48.299422   11693 host.go:66] Checking if "addons-320546" exists ...
	I0416 16:20:48.299706   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:48.299732   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:48.315045   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0416 16:20:48.315487   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:48.315975   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:48.315998   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:48.316358   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:48.317011   11693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:20:48.317046   11693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:20:48.332274   11693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0416 16:20:48.332723   11693 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:20:48.333173   11693 main.go:141] libmachine: Using API Version  1
	I0416 16:20:48.333193   11693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:20:48.333563   11693 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:20:48.333770   11693 main.go:141] libmachine: (addons-320546) Calling .GetState
	I0416 16:20:48.335408   11693 main.go:141] libmachine: (addons-320546) Calling .DriverName
	I0416 16:20:48.335618   11693 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0416 16:20:48.335637   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHHostname
	I0416 16:20:48.338198   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:48.338628   11693 main.go:141] libmachine: (addons-320546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:f0:9d", ip: ""} in network mk-addons-320546: {Iface:virbr1 ExpiryTime:2024-04-16 17:20:01 +0000 UTC Type:0 Mac:52:54:00:c8:f0:9d Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:addons-320546 Clientid:01:52:54:00:c8:f0:9d}
	I0416 16:20:48.338656   11693 main.go:141] libmachine: (addons-320546) DBG | domain addons-320546 has defined IP address 192.168.39.101 and MAC address 52:54:00:c8:f0:9d in network mk-addons-320546
	I0416 16:20:48.338785   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHPort
	I0416 16:20:48.338926   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHKeyPath
	I0416 16:20:48.339089   11693 main.go:141] libmachine: (addons-320546) Calling .GetSSHUsername
	I0416 16:20:48.339209   11693 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/addons-320546/id_rsa Username:docker}
	I0416 16:20:50.014622   11693 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-h7wqn" in "kube-system" namespace has status "Ready":"False"
	I0416 16:20:50.135773   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.303232995s)
	I0416 16:20:50.135821   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.135834   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.135868   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.142571604s)
	I0416 16:20:50.135913   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.135927   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.135930   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.031601873s)
	I0416 16:20:50.135953   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.135962   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136044   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.792889152s)
	I0416 16:20:50.136084   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136095   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136106   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.622518157s)
	I0416 16:20:50.136131   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136147   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136190   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.567746675s)
	I0416 16:20:50.136212   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136221   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136294   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.923999545s)
	I0416 16:20:50.136307   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.136321   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136332   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136337   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.136343   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.136355   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.136366   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136376   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136388   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.136393   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.767031712s)
	I0416 16:20:50.136396   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.136409   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136345   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.136416   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136412   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136426   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136437   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136428   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136532   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.089258122s)
	W0416 16:20:50.136562   11693 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0416 16:20:50.136582   11693 retry.go:31] will retry after 300.188006ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
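The failure above is an ordering race rather than a broken manifest: the batched apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in one shot, and the class is rejected because the just-created CRDs are not yet established in the API server (hence "ensure CRDs are installed first"). The run recovers through the 300ms retry noted above and the later apply --force re-run at 16:20:50.437331. A minimal two-phase sketch that avoids the race, assuming the same kubectl binary and manifest paths shown in the log (illustrative only, not part of the report):

    # 1) apply only the snapshot CRDs
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # 2) wait until the CRDs are served before creating objects that use them
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl wait \
      --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # 3) apply the snapshot class, RBAC and controller deployment
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml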
	I0416 16:20:50.136645   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.512982592s)
	I0416 16:20:50.136660   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136669   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136744   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.136765   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.136772   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.136781   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136788   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.136824   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.136861   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.136868   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.136875   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.136882   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.137138   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.137150   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.138443   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.138476   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.138484   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.138663   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.138688   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.138695   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.138701   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.138707   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.138757   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.138777   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.138783   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.138789   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.138795   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.138817   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.138870   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.138882   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.138890   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.138898   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.138906   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.138962   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.138969   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.138976   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.138983   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.139019   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.139036   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.139042   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.139104   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.139143   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.139161   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.142711   11693 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-320546 service yakd-dashboard -n yakd-dashboard
	
	I0416 16:20:50.139395   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.139422   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.139434   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.140131   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.140154   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.140394   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.140416   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.141552   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.141575   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.142850   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.144391   11693 addons.go:470] Verifying addon metrics-server=true in "addons-320546"
	I0416 16:20:50.142859   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.142860   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.142871   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.144511   11693 addons.go:470] Verifying addon registry=true in "addons-320546"
	I0416 16:20:50.142874   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.144550   11693 addons.go:470] Verifying addon ingress=true in "addons-320546"
	I0416 16:20:50.145864   11693 out.go:177] * Verifying registry addon...
	I0416 16:20:50.147290   11693 out.go:177] * Verifying ingress addon...
	I0416 16:20:50.149630   11693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0416 16:20:50.149665   11693 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0416 16:20:50.156281   11693 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0416 16:20:50.156300   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:50.160526   11693 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0416 16:20:50.160551   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
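
The kapi.go:75/86/96 lines here are minikube polling pods by label selector until they report Ready. For readers who want to reproduce that kind of wait outside the test harness, a minimal client-go sketch follows; it is an illustration only, assuming a local kubeconfig, and the helper name, polling interval, and timeout are made up rather than taken from minikube's kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsReady polls pods matching a label selector until every matching
// pod reports the Ready condition, or the timeout expires.
func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				allReady = false
			}
		}
		if allReady {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
}

func main() {
	// Assumes the default kubeconfig location on the machine running the check.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and selector taken from the log line above.
	if err := waitForPodsReady(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("registry pods are Ready")
}
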
	I0416 16:20:50.170215   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:50.170235   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:50.170512   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:50.170530   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:50.170533   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:50.437331   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0416 16:20:50.656074   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:50.657812   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:51.153938   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:51.157348   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:51.656166   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:51.656343   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:52.169187   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:52.175562   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:52.471069   11693 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-h7wqn" in "kube-system" namespace has status "Ready":"False"
	I0416 16:20:52.663640   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:52.665957   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:53.174529   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:53.194317   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:53.275990   11693 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.940349187s)
	I0416 16:20:53.277649   11693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0416 16:20:53.275990   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.858912104s)
	I0416 16:20:53.277709   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:53.277747   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:53.276097   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.838721897s)
	I0416 16:20:53.277792   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:53.277817   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:53.278050   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:53.278081   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:53.279822   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:53.279830   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:53.279835   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:53.279849   11693 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0416 16:20:53.281572   11693 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0416 16:20:53.281587   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0416 16:20:53.280155   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:53.281657   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:53.281673   11693 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-320546"
	I0416 16:20:53.280156   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:53.280174   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:53.280203   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:53.283040   11693 out.go:177] * Verifying csi-hostpath-driver addon...
	I0416 16:20:53.283070   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:53.284600   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:53.284620   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:53.284913   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:53.284929   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:53.285317   11693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0416 16:20:53.309623   11693 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0416 16:20:53.309645   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:53.379719   11693 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0416 16:20:53.379749   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0416 16:20:53.419041   11693 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0416 16:20:53.419067   11693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0416 16:20:53.578192   11693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
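
The ssh_runner lines above show the general pattern for addon installation: the manifests are copied onto the node and then applied with the node's own kubectl against /var/lib/minikube/kubeconfig. Below is a hedged sketch of the same idea as a plain local exec from Go; the function name is invented for illustration, and the real harness runs the command over SSH inside the guest rather than on the host.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests mirrors the "kubectl apply -f ... -f ..." invocations in the
// log, with an explicit KUBECONFIG. Illustrative only: minikube executes this
// via ssh_runner inside the VM, not as a local process.
func applyManifests(kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Paths are the ones visible in the log line above; adjust for a local experiment.
	err := applyManifests("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/gcp-auth-ns.yaml",
		"/etc/kubernetes/addons/gcp-auth-service.yaml",
		"/etc/kubernetes/addons/gcp-auth-webhook.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
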
	I0416 16:20:53.655199   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:53.655286   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:53.792522   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:54.162049   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:54.167213   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:54.291613   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:54.686989   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:54.687479   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:54.809583   11693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.231332388s)
	I0416 16:20:54.809654   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:54.809674   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:54.810067   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:54.810086   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:54.810101   11693 main.go:141] libmachine: Making call to close driver server
	I0416 16:20:54.810109   11693 main.go:141] libmachine: (addons-320546) Calling .Close
	I0416 16:20:54.810135   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:54.810375   11693 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:20:54.810389   11693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:20:54.810388   11693 main.go:141] libmachine: (addons-320546) DBG | Closing plugin on server side
	I0416 16:20:54.811682   11693 addons.go:470] Verifying addon gcp-auth=true in "addons-320546"
	I0416 16:20:54.813373   11693 out.go:177] * Verifying gcp-auth addon...
	I0416 16:20:54.815277   11693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0416 16:20:54.840280   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:54.845874   11693 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0416 16:20:54.845891   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:54.973814   11693 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-h7wqn" in "kube-system" namespace has status "Ready":"False"
	I0416 16:20:55.163537   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:55.164468   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:55.290984   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:55.320709   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:55.658569   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:55.663357   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:55.790738   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:55.819519   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:56.153971   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:56.154544   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:56.292568   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:56.320305   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:56.654817   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:56.655201   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:56.791534   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:56.819368   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:57.155384   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:57.157632   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:57.291603   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:57.319840   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:57.469878   11693 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-h7wqn" in "kube-system" namespace has status "Ready":"False"
	I0416 16:20:57.658198   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:57.661078   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:57.791720   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:57.821571   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:58.169208   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:58.172211   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:58.295226   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:58.320020   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:58.659008   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:58.662107   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:58.791077   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:58.819134   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:59.155217   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:59.155218   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:59.292155   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:59.321497   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:20:59.703395   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:20:59.704002   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:20:59.717505   11693 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-h7wqn" in "kube-system" namespace has status "Ready":"False"
	I0416 16:20:59.793971   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:20:59.822810   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:00.156184   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:00.157759   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:00.291678   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:00.322279   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:00.654560   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:00.654916   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:00.792015   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:00.823565   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:00.971290   11693 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-h7wqn" in "kube-system" namespace has status "Ready":"True"
	I0416 16:21:00.971317   11693 pod_ready.go:81] duration metric: took 13.007694191s for pod "nvidia-device-plugin-daemonset-h7wqn" in "kube-system" namespace to be "Ready" ...
	I0416 16:21:00.971338   11693 pod_ready.go:38] duration metric: took 14.062312976s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:21:00.971356   11693 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:21:00.971409   11693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:21:00.991939   11693 api_server.go:72] duration metric: took 20.783176232s to wait for apiserver process to appear ...
	I0416 16:21:00.991960   11693 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:21:00.991980   11693 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I0416 16:21:01.000149   11693 api_server.go:279] https://192.168.39.101:8443/healthz returned 200:
	ok
	I0416 16:21:01.001480   11693 api_server.go:141] control plane version: v1.29.3
	I0416 16:21:01.001500   11693 api_server.go:131] duration metric: took 9.533853ms to wait for apiserver health ...
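
The healthz probe logged above is essentially an HTTPS GET against the apiserver endpoint. A short Go sketch of an equivalent check follows; it skips TLS verification purely to keep the example small (an assumption for illustration, since the real check authenticates with the cluster's certificates), and the address is the one reported in the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify keeps the example short; minikube's own probe uses
	// the cluster CA and client certificates instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.101:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // a healthy apiserver returns 200 and "ok"
}
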
	I0416 16:21:01.001507   11693 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:21:01.014368   11693 system_pods.go:59] 18 kube-system pods found
	I0416 16:21:01.014395   11693 system_pods.go:61] "coredns-76f75df574-69q4z" [f3aa8aee-4f59-444f-998b-51d1069b4b2f] Running
	I0416 16:21:01.014402   11693 system_pods.go:61] "csi-hostpath-attacher-0" [e6aa7ab0-a826-4e5b-b825-ddaaa156b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0416 16:21:01.014408   11693 system_pods.go:61] "csi-hostpath-resizer-0" [edaac848-64a0-46bf-87bc-c7a476c5f619] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0416 16:21:01.014417   11693 system_pods.go:61] "csi-hostpathplugin-lgr25" [12cda063-1213-4803-9cc2-e992215d9225] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0416 16:21:01.014425   11693 system_pods.go:61] "etcd-addons-320546" [6e9cffb7-7a6e-4bf0-8797-bd5bae4d4164] Running
	I0416 16:21:01.014430   11693 system_pods.go:61] "kube-apiserver-addons-320546" [b229c351-a3a2-4fa1-a2ce-d5502df83d02] Running
	I0416 16:21:01.014434   11693 system_pods.go:61] "kube-controller-manager-addons-320546" [2b0135e4-18c4-4ed1-817b-8bc83fc6844d] Running
	I0416 16:21:01.014438   11693 system_pods.go:61] "kube-ingress-dns-minikube" [9581dcbf-6a10-463a-bfda-8e35065cd1df] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0416 16:21:01.014442   11693 system_pods.go:61] "kube-proxy-vkm8w" [adbf2174-84c1-4d5e-92f6-fa177c06a454] Running
	I0416 16:21:01.014446   11693 system_pods.go:61] "kube-scheduler-addons-320546" [51a7e025-7b1c-4383-950b-6133e8ce64a7] Running
	I0416 16:21:01.014451   11693 system_pods.go:61] "metrics-server-75d6c48ddd-9ncdk" [ba7c9057-48bc-4693-ada5-ae248b38140a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 16:21:01.014458   11693 system_pods.go:61] "nvidia-device-plugin-daemonset-h7wqn" [7c8cc092-7db5-49a3-88fa-480f2ecee1b3] Running
	I0416 16:21:01.014464   11693 system_pods.go:61] "registry-proxy-xkgds" [232b8056-a4f1-4480-be41-acd884e1691e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0416 16:21:01.014469   11693 system_pods.go:61] "registry-rl7f5" [1c0770e4-b4b2-4e20-b112-f4222e84b5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0416 16:21:01.014477   11693 system_pods.go:61] "snapshot-controller-58dbcc7b99-65ct5" [c9b785b6-63f5-49aa-82dc-26f4bcb0057e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:01.014482   11693 system_pods.go:61] "snapshot-controller-58dbcc7b99-d8xn5" [082e757d-e3c7-45c0-8a56-b77b9a274a50] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:01.014489   11693 system_pods.go:61] "storage-provisioner" [6312df28-a808-4bc9-a458-fdeefa768264] Running
	I0416 16:21:01.014495   11693 system_pods.go:61] "tiller-deploy-7b677967b9-82r9t" [94a681d5-055b-449c-941b-808c08e30de3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0416 16:21:01.014503   11693 system_pods.go:74] duration metric: took 12.991702ms to wait for pod list to return data ...
	I0416 16:21:01.014513   11693 default_sa.go:34] waiting for default service account to be created ...
	I0416 16:21:01.016354   11693 default_sa.go:45] found service account: "default"
	I0416 16:21:01.016370   11693 default_sa.go:55] duration metric: took 1.851806ms for default service account to be created ...
	I0416 16:21:01.016376   11693 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 16:21:01.029014   11693 system_pods.go:86] 18 kube-system pods found
	I0416 16:21:01.029042   11693 system_pods.go:89] "coredns-76f75df574-69q4z" [f3aa8aee-4f59-444f-998b-51d1069b4b2f] Running
	I0416 16:21:01.029053   11693 system_pods.go:89] "csi-hostpath-attacher-0" [e6aa7ab0-a826-4e5b-b825-ddaaa156b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0416 16:21:01.029064   11693 system_pods.go:89] "csi-hostpath-resizer-0" [edaac848-64a0-46bf-87bc-c7a476c5f619] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0416 16:21:01.029076   11693 system_pods.go:89] "csi-hostpathplugin-lgr25" [12cda063-1213-4803-9cc2-e992215d9225] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0416 16:21:01.029085   11693 system_pods.go:89] "etcd-addons-320546" [6e9cffb7-7a6e-4bf0-8797-bd5bae4d4164] Running
	I0416 16:21:01.029094   11693 system_pods.go:89] "kube-apiserver-addons-320546" [b229c351-a3a2-4fa1-a2ce-d5502df83d02] Running
	I0416 16:21:01.029103   11693 system_pods.go:89] "kube-controller-manager-addons-320546" [2b0135e4-18c4-4ed1-817b-8bc83fc6844d] Running
	I0416 16:21:01.029112   11693 system_pods.go:89] "kube-ingress-dns-minikube" [9581dcbf-6a10-463a-bfda-8e35065cd1df] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0416 16:21:01.029128   11693 system_pods.go:89] "kube-proxy-vkm8w" [adbf2174-84c1-4d5e-92f6-fa177c06a454] Running
	I0416 16:21:01.029142   11693 system_pods.go:89] "kube-scheduler-addons-320546" [51a7e025-7b1c-4383-950b-6133e8ce64a7] Running
	I0416 16:21:01.029151   11693 system_pods.go:89] "metrics-server-75d6c48ddd-9ncdk" [ba7c9057-48bc-4693-ada5-ae248b38140a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 16:21:01.029161   11693 system_pods.go:89] "nvidia-device-plugin-daemonset-h7wqn" [7c8cc092-7db5-49a3-88fa-480f2ecee1b3] Running
	I0416 16:21:01.029172   11693 system_pods.go:89] "registry-proxy-xkgds" [232b8056-a4f1-4480-be41-acd884e1691e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0416 16:21:01.029181   11693 system_pods.go:89] "registry-rl7f5" [1c0770e4-b4b2-4e20-b112-f4222e84b5a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0416 16:21:01.029187   11693 system_pods.go:89] "snapshot-controller-58dbcc7b99-65ct5" [c9b785b6-63f5-49aa-82dc-26f4bcb0057e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:01.029193   11693 system_pods.go:89] "snapshot-controller-58dbcc7b99-d8xn5" [082e757d-e3c7-45c0-8a56-b77b9a274a50] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0416 16:21:01.029200   11693 system_pods.go:89] "storage-provisioner" [6312df28-a808-4bc9-a458-fdeefa768264] Running
	I0416 16:21:01.029209   11693 system_pods.go:89] "tiller-deploy-7b677967b9-82r9t" [94a681d5-055b-449c-941b-808c08e30de3] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0416 16:21:01.029221   11693 system_pods.go:126] duration metric: took 12.838949ms to wait for k8s-apps to be running ...
	I0416 16:21:01.029233   11693 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 16:21:01.029283   11693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:21:01.053648   11693 system_svc.go:56] duration metric: took 24.40803ms WaitForService to wait for kubelet
	I0416 16:21:01.053675   11693 kubeadm.go:576] duration metric: took 20.844914528s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:21:01.053695   11693 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:21:01.057089   11693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:21:01.057113   11693 node_conditions.go:123] node cpu capacity is 2
	I0416 16:21:01.057140   11693 node_conditions.go:105] duration metric: took 3.438882ms to run NodePressure ...
	I0416 16:21:01.057155   11693 start.go:240] waiting for startup goroutines ...
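
The node_conditions lines read the node's ephemeral-storage and CPU capacity while verifying the NodePressure condition. A hedged client-go sketch that surfaces the same fields is below; it is illustrative, not minikube's node_conditions code, and again assumes a local kubeconfig.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure and DiskPressure should both be False on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
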
	I0416 16:21:01.154970   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:01.155492   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:01.291941   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:01.319061   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:01.654159   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:01.655641   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:01.791271   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:01.821354   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:02.155402   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:02.162578   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:02.291932   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:02.320831   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:02.654194   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:02.655251   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:02.798810   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:02.819656   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:03.155492   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:03.155541   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:03.292468   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:03.322451   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:03.656571   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:03.657286   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:03.790955   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:03.819808   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:04.154984   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:04.157065   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:04.290704   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:04.319525   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:04.655539   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:04.656008   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:04.792046   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:04.820069   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:05.155314   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:05.155537   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:05.291372   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:05.319581   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:05.656600   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:05.656753   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:05.791491   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:05.818504   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:06.156264   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:06.157842   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:06.290577   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:06.319493   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:06.654942   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:06.656963   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:06.792384   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:06.819864   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:07.156023   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:07.156340   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:07.292015   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:07.319423   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:07.655645   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:07.664902   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:07.791760   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:07.819733   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:08.156740   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:08.159089   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:08.291538   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:08.319580   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:08.656827   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:08.657475   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:08.791188   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:08.819785   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:09.158171   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:09.159727   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:09.292616   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:09.322208   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:09.655356   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:09.655501   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:09.791432   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:09.819618   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:10.158605   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:10.159147   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:10.292177   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:10.319645   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:10.656048   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:10.660340   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:10.791755   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:10.819430   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:11.157058   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:11.157468   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:11.291574   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:11.318900   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:11.656984   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:11.658775   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:11.791332   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:11.822044   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:12.155388   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:12.155620   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:12.291357   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:12.318811   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:12.655290   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:12.660094   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:12.791461   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:12.819472   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:13.157205   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:13.158527   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:13.291614   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:13.319089   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:13.656088   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:13.656204   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:13.791400   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:13.818529   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:14.155610   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:14.155762   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:14.291095   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:14.319628   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:14.679396   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:14.679537   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:14.791449   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:14.820955   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:15.157056   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:15.157728   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:15.291009   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:15.319577   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:15.657840   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:15.665220   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:15.792493   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:15.818924   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:16.155007   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:16.156287   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:16.290925   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:16.318433   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:16.656018   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:16.658476   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:16.798139   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:16.826650   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:17.156282   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:17.156614   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:17.294063   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:17.321347   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:17.654966   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:17.658461   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:17.794228   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:17.819075   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:18.155869   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:18.156466   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:18.292824   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:18.326042   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:18.656695   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:18.656760   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:18.801671   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:18.820703   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:19.155755   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:19.157533   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:19.292518   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:19.321848   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:19.655904   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:19.659373   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:19.791918   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:19.821625   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:20.155604   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:20.156336   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:20.291635   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:20.319320   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:20.656516   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:20.662593   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:21.050301   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:21.052757   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:21.156114   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:21.157627   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:21.292166   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:21.319746   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:21.657952   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:21.658421   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:21.791969   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:21.820882   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:22.158073   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:22.158240   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:22.291328   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:22.320630   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:22.665194   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:22.665505   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:22.793575   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:22.822969   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:23.163558   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:23.165291   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:23.291392   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:23.319394   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:23.656543   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:23.660442   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:23.792627   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:23.818935   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:24.156933   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:24.163218   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:24.295134   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:24.320432   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:24.655611   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:24.656785   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:24.796220   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:24.825151   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:25.420776   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:25.424313   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:25.424368   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:25.425311   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:25.656675   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:25.657684   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:25.792485   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:25.821972   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:26.155570   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:26.156211   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:26.302529   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:26.326931   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:26.663196   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:26.670489   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:26.792028   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:26.819736   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:27.155724   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:27.160321   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:27.443040   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:27.445503   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:27.659525   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:27.659836   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:27.790423   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:27.819973   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:28.155401   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:28.156725   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:28.291794   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:28.319592   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:28.658501   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:28.658580   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:28.792259   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:28.821746   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:29.156601   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:29.157322   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:29.290520   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:29.319074   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:29.655599   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:29.656678   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:29.790655   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:29.819550   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:30.165076   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:30.166186   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:30.306815   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:30.320622   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:30.656464   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:30.656782   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:30.791279   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:30.819766   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:31.155354   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:31.158819   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:31.292466   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:31.318460   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:31.657126   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:31.661275   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:31.791761   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:31.819293   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:32.156176   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:32.156485   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:32.291690   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:32.319802   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:32.655207   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:32.656407   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:32.793744   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:32.819189   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:33.156458   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:33.156800   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:33.292250   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:33.319891   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:33.769967   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:33.769994   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:33.796486   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:33.819265   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:34.154800   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:34.155584   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:34.291020   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:34.332547   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:34.658610   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:34.661743   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:34.793078   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:34.819696   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:35.156325   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:35.157298   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:35.290838   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:35.319562   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:35.659155   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:35.659506   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:35.792468   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:35.825613   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:36.157572   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:36.158362   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0416 16:21:36.291447   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:36.322849   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:36.654979   11693 kapi.go:107] duration metric: took 46.505346585s to wait for kubernetes.io/minikube-addons=registry ...
	I0416 16:21:36.657247   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:36.792189   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:36.821398   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:37.156913   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:37.291738   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:37.321358   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:37.662606   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:37.792955   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:37.819635   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:38.154940   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:38.292687   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:38.320132   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:38.654597   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:38.792156   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:38.819724   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:39.155157   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:39.293347   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:39.324396   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:39.654909   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:39.793117   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:39.821011   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:40.155673   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:40.292440   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:40.326494   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:40.655383   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:40.791695   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:40.819664   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.156391   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:41.290993   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:41.319805   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.991867   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:41.992753   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:41.992853   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:42.164377   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:42.295447   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:42.318685   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:42.660395   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:42.792759   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:42.819698   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:43.155998   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:43.291075   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:43.318556   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:43.657412   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:43.791491   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:43.821004   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:44.154886   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:44.292447   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:44.318778   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:44.655528   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:44.791002   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:44.821064   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:45.154785   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:45.297519   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:45.321368   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:45.654943   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:45.802595   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:45.820179   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:46.155508   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:46.302597   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:46.319091   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:46.654076   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:46.791707   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:46.828544   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:47.155168   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:47.292397   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:47.321138   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:47.655054   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:47.795136   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:47.834491   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:48.155494   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:48.291265   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:48.319036   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:48.654854   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:48.791409   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:48.819306   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:49.289836   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:49.297369   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:49.322742   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:49.654240   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:49.791177   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:49.818820   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:50.157115   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:50.299213   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:50.320503   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:50.655051   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:50.796945   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:50.821049   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:51.155531   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:51.291208   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:51.319796   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:51.654473   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:51.791438   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:51.819860   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:52.205962   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:52.291645   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:52.322001   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:52.654710   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:52.793711   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:52.820012   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:53.154621   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:53.292307   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:53.320112   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:53.654706   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:53.792471   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:53.820226   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:54.154877   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:54.291901   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0416 16:21:54.319434   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:54.655236   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:54.791288   11693 kapi.go:107] duration metric: took 1m1.505968955s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0416 16:21:54.822617   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:55.154809   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:55.319123   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:55.653978   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:55.818907   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:56.154215   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:56.322151   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.074262   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.074648   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:57.154940   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:57.319996   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:57.654551   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:57.819656   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:58.154044   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:58.319697   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:58.653972   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:58.819250   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:59.154926   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:59.319057   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:21:59.661923   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:21:59.820280   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:00.440642   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:00.442752   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:00.655002   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:00.820757   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:01.154242   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:01.319602   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:01.654163   11693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0416 16:22:01.829512   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:02.155299   11693 kapi.go:107] duration metric: took 1m12.005631903s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0416 16:22:02.319697   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:02.820069   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:03.319617   11693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0416 16:22:03.819688   11693 kapi.go:107] duration metric: took 1m9.004410252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0416 16:22:03.821468   11693 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-320546 cluster.
	I0416 16:22:03.823087   11693 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0416 16:22:03.824595   11693 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0416 16:22:03.826150   11693 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner-rancher, helm-tiller, nvidia-device-plugin, yakd, metrics-server, storage-provisioner, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0416 16:22:03.827639   11693 addons.go:505] duration metric: took 1m23.61885184s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner-rancher helm-tiller nvidia-device-plugin yakd metrics-server storage-provisioner inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0416 16:22:03.827679   11693 start.go:245] waiting for cluster config update ...
	I0416 16:22:03.827695   11693 start.go:254] writing updated cluster config ...
	I0416 16:22:03.827932   11693 ssh_runner.go:195] Run: rm -f paused
	I0416 16:22:03.878160   11693 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 16:22:03.880177   11693 out.go:177] * Done! kubectl is now configured to use "addons-320546" cluster and "default" namespace by default
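	
	The repeated kapi.go:96 lines above ("waiting for pod ..., current state: Pending") and the closing "duration metric: took ..." summaries reflect a label-selector poll loop: list the pods matching a selector, log their phase, sleep, and repeat until they are all Running or a timeout expires. Below is a minimal, illustrative client-go sketch of that pattern only — it is not minikube's actual kapi.go code, and the 500ms interval, 6-minute timeout, kube-system namespace, and kubeconfig path are assumptions made for the example.
	
	// Illustrative sketch of the label-selector wait pattern seen in the log above.
	// NOT minikube's kapi.go implementation; interval, timeout, namespace, and
	// kubeconfig path are assumptions for the example.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPods polls pods matching selector in ns until all are Running,
	// logging the current state on each attempt and the total wait time at the end.
	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, interval, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				log.Printf("duration metric: took %s to wait for %s ...", time.Since(start), selector)
				return nil
			}
			state := "Pending"
			if err == nil && len(pods.Items) > 0 {
				state = string(pods.Items[0].Status.Phase)
			}
			log.Printf("waiting for pod %q, current state: %s: [%v]", selector, state, err)
			time.Sleep(interval)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
	}
	
	// allRunning reports whether every pod in the list has reached the Running phase.
	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		err = waitForPods(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 500*time.Millisecond, 6*time.Minute)
		if err != nil {
			log.Fatal(err)
		}
	}
	
	As a usage note, the gcp-auth messages above imply the inverse case as well: a pod that should not have credentials mounted would simply carry a label whose key is gcp-auth-skip-secret in its metadata.labels, and already-running pods pick up the mount only after being recreated or after rerunning addons enable with --refresh.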
	
	
	==> CRI-O <==
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.344299706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713284713344275274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573324,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7bc944a-d8e8-4e13-b8ac-d62dfe79edcb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.344946323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d351689-d4d8-46ad-afba-ec4e24af3f40 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.345035608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d351689-d4d8-46ad-afba-ec4e24af3f40 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.345337755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9401ff45b5d11283c5c59ae7f167f1f594bf77879147bf8ee4321d8a275890e4,PodSandboxId:5bc38316d925ee330817fe127dfc5bf1479b1b3be27564b0cb8a9021c1787988,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713284705329659914,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2dnfr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ac1098f-abbb-402f-ae6e-dfe5334735ab,},Annotations:map[string]string{io.kubernetes.container.hash: 8d04598f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896f1ee148c27db5403d97a1be3fd2817d3837c2e008dc606cb27ecfc36355c,PodSandboxId:a1cc71ec3d2931964232fef58da3e9c28f77d624d0ef15dc285244a190b67351,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1713284563119900593,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddf62631-814a-41e0-96ed-ec74b1056618,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c3cccb4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fb530ad64c093b25199ad71960a382f17a6948ad1fdc280b922a5de98aa1a2,PodSandboxId:205342eeb83082fc24d1b2247b274faf11a6986b67f0be1ae1d26ed6d981f095,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713284561112499530,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4jpnf,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 1a3eee24-c904-4664-822f-114064b24f70,},Annotations:map[string]string{io.kubernetes.container.hash: 3770ef7b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc70e0c152569f61497ab49f7123a8e4edb0f4a2ed195d3b3adbe33a2e7f5d61,PodSandboxId:c53001014bdc9d0155e824cd255fdddc96c7ae4bc070c62b86ee4ac2bf465308,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713284523176094528,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-tg942,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c9e3b77c-17d5-4889-bc47-7cb21cbbdff4,},Annotations:map[string]string{io.kubernetes.container.hash: 71abd316,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd835adb6172b7d84080d851be26632a2c6096247c2edfbd75b6715065730b8,PodSandboxId:2b434e9457079983abc0b8c7d6fa01ad208544772440cf9cea2707beae7ae188,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713284500652701253,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xl29,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcb9cbf3-8cf2-4652-aceb-874934987dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 996fd009,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b151b45103436f0d6fa8d418a31137e2778913ef939c8b9e7b4b70e072d2c4b,PodSandboxId:9eb79a003bd7f16526ea867796b63b575238d923bf9e595945985746a71fee58,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1713284499944754720,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vdlvd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 89a78b50-db68-4b4d-a279-a651004f9b53,},Annotations:map[string]string{io.kubernetes.container.hash: cd9a09ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df4bf9a048912d7b7331a47d7f6ad64faf70474c6fb526c15a85750d828a6e5,PodSandboxId:93c359a1c11f0459a779c9b7ca14bd3bb9c8f5146ce8ecc8d7e66b762c530dd8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1713284490144383124,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-w94qg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 208a058d-7e12-4509-b543-5e14c69bcf86,},Annotations:map[string]string{io.kubernetes.container.hash: 7fc6d6dc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b6a9cb14c99ae2ee1cf599b601f1bd6cd9740f7a1db3283a5356141dae9d3,PodSandboxId:ab9aabf53cdf080c936c507966ec3eb048c62a036d77ac83a885b4f7ebc5f405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713284449388427282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6312df28-a808-4bc9-a458-fdeefa768264,},Annotations:map[string]string{io.kubernetes.container.hash: fdc9d727,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7eb647612d34be2541874b5f2af3d9972ee7701ba0f9c678e2e331039d932a4,PodSandboxId:04f694d2fd7944cadd997b2e5b62942042bc9a0bf08fae5774222a9321cb5398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713284444528890228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-69q4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3aa8aee-4f59-444f-998b-51d1069b4b2f,},Annotations:map[string]string{io.kubernetes.container.hash: a9fde3f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c5ebbb9b27ea7396c30d95e1227a3c09d62da4728426c9d864d4f2ed9975ef,PodSandboxId:4c6ef2e751fe080890d1f1af47511f1d53f075965d1f0eb715aec72524e120c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713284441563989092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkm8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbf2174-84c1-4d5e-92f6-fa177c06a454,},Annotations:map[string]string{io.kubernetes.container.hash: 5ae3447f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc05f9380451a2ffd3f1edb0fcc54e1ecdb3934158577e5cbba2fb83e37ee5fd,PodSandboxId:c85f0187a7817172e4c620b8badb99e09f2b84107269a83e20923518aa4cbe34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c39
0d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713284421946946579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9c888cb29cd726e8c3c1fcd3a30a108,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4230602147e4651f3a1525c87ee453684e63cd20d72e538f9fcec32778f4279a,PodSandboxId:66f1fb983c459f4ea7acefc00da6e7a92711b6df9f0bce549a98dbca64a9f9ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f9
7387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713284421957135124,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a324aab73278545cee719b1fcca93b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc32f58cf825ccdbbf259eff3bb3e6200518ff2c64f809d5c9a2e6f37eb403f6,PodSandboxId:05b1ba8dcbd88fcb3ba5bc206f2da14733df44a779d3b63a5fd52a4e12a94108,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713284421899287334,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f29d82586b2f96163d64927fe37a1db,},Annotations:map[string]string{io.kubernetes.container.hash: af1f417b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc018dd3206a6464badea06cd3e59181ccc6f140e466f7cfd874ddf78ffeac90,PodSandboxId:0c5955052bf291491ad5f9a2ddb5d1ddcc3cab193db3fa3645b2f4f2286cd224,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8ef
aab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713284421860962524,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbd326adaab28fe35f8c5d387a0cb69,},Annotations:map[string]string{io.kubernetes.container.hash: e0314044,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d351689-d4d8-46ad-afba-ec4e24af3f40 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.390260750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aab74dc4-0d04-4441-9ea6-cf3093f5db08 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.390354920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aab74dc4-0d04-4441-9ea6-cf3093f5db08 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.392113814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=145a5347-def4-4e38-b1e5-9e2c3f3a0be3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.393784463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713284713393757725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573324,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=145a5347-def4-4e38-b1e5-9e2c3f3a0be3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.394718605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95fcb96b-7caa-4af7-b045-59031246cd5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.394797463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95fcb96b-7caa-4af7-b045-59031246cd5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.395114810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9401ff45b5d11283c5c59ae7f167f1f594bf77879147bf8ee4321d8a275890e4,PodSandboxId:5bc38316d925ee330817fe127dfc5bf1479b1b3be27564b0cb8a9021c1787988,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713284705329659914,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2dnfr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ac1098f-abbb-402f-ae6e-dfe5334735ab,},Annotations:map[string]string{io.kubernetes.container.hash: 8d04598f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896f1ee148c27db5403d97a1be3fd2817d3837c2e008dc606cb27ecfc36355c,PodSandboxId:a1cc71ec3d2931964232fef58da3e9c28f77d624d0ef15dc285244a190b67351,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1713284563119900593,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddf62631-814a-41e0-96ed-ec74b1056618,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c3cccb4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fb530ad64c093b25199ad71960a382f17a6948ad1fdc280b922a5de98aa1a2,PodSandboxId:205342eeb83082fc24d1b2247b274faf11a6986b67f0be1ae1d26ed6d981f095,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713284561112499530,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4jpnf,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 1a3eee24-c904-4664-822f-114064b24f70,},Annotations:map[string]string{io.kubernetes.container.hash: 3770ef7b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc70e0c152569f61497ab49f7123a8e4edb0f4a2ed195d3b3adbe33a2e7f5d61,PodSandboxId:c53001014bdc9d0155e824cd255fdddc96c7ae4bc070c62b86ee4ac2bf465308,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713284523176094528,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-tg942,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c9e3b77c-17d5-4889-bc47-7cb21cbbdff4,},Annotations:map[string]string{io.kubernetes.container.hash: 71abd316,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd835adb6172b7d84080d851be26632a2c6096247c2edfbd75b6715065730b8,PodSandboxId:2b434e9457079983abc0b8c7d6fa01ad208544772440cf9cea2707beae7ae188,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713284500652701253,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xl29,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcb9cbf3-8cf2-4652-aceb-874934987dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 996fd009,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b151b45103436f0d6fa8d418a31137e2778913ef939c8b9e7b4b70e072d2c4b,PodSandboxId:9eb79a003bd7f16526ea867796b63b575238d923bf9e595945985746a71fee58,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1713284499944754720,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vdlvd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 89a78b50-db68-4b4d-a279-a651004f9b53,},Annotations:map[string]string{io.kubernetes.container.hash: cd9a09ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df4bf9a048912d7b7331a47d7f6ad64faf70474c6fb526c15a85750d828a6e5,PodSandboxId:93c359a1c11f0459a779c9b7ca14bd3bb9c8f5146ce8ecc8d7e66b762c530dd8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1713284490144383124,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-w94qg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 208a058d-7e12-4509-b543-5e14c69bcf86,},Annotations:map[string]string{io.kubernetes.container.hash: 7fc6d6dc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b6a9cb14c99ae2ee1cf599b601f1bd6cd9740f7a1db3283a5356141dae9d3,PodSandboxId:ab9aabf53cdf080c936c507966ec3eb048c62a036d77ac83a885b4f7ebc5f405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713284449388427282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6312df28-a808-4bc9-a458-fdeefa768264,},Annotations:map[string]string{io.kubernetes.container.hash: fdc9d727,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7eb647612d34be2541874b5f2af3d9972ee7701ba0f9c678e2e331039d932a4,PodSandboxId:04f694d2fd7944cadd997b2e5b62942042bc9a0bf08fae5774222a9321cb5398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713284444528890228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-69q4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3aa8aee-4f59-444f-998b-51d1069b4b2f,},Annotations:map[string]string{io.kubernetes.container.hash: a9fde3f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c5ebbb9b27ea7396c30d95e1227a3c09d62da4728426c9d864d4f2ed9975ef,PodSandboxId:4c6ef2e751fe080890d1f1af47511f1d53f075965d1f0eb715aec72524e120c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713284441563989092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkm8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbf2174-84c1-4d5e-92f6-fa177c06a454,},Annotations:map[string]string{io.kubernetes.container.hash: 5ae3447f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc05f9380451a2ffd3f1edb0fcc54e1ecdb3934158577e5cbba2fb83e37ee5fd,PodSandboxId:c85f0187a7817172e4c620b8badb99e09f2b84107269a83e20923518aa4cbe34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c39
0d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713284421946946579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9c888cb29cd726e8c3c1fcd3a30a108,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4230602147e4651f3a1525c87ee453684e63cd20d72e538f9fcec32778f4279a,PodSandboxId:66f1fb983c459f4ea7acefc00da6e7a92711b6df9f0bce549a98dbca64a9f9ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f9
7387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713284421957135124,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a324aab73278545cee719b1fcca93b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc32f58cf825ccdbbf259eff3bb3e6200518ff2c64f809d5c9a2e6f37eb403f6,PodSandboxId:05b1ba8dcbd88fcb3ba5bc206f2da14733df44a779d3b63a5fd52a4e12a94108,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713284421899287334,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f29d82586b2f96163d64927fe37a1db,},Annotations:map[string]string{io.kubernetes.container.hash: af1f417b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc018dd3206a6464badea06cd3e59181ccc6f140e466f7cfd874ddf78ffeac90,PodSandboxId:0c5955052bf291491ad5f9a2ddb5d1ddcc3cab193db3fa3645b2f4f2286cd224,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8ef
aab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713284421860962524,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbd326adaab28fe35f8c5d387a0cb69,},Annotations:map[string]string{io.kubernetes.container.hash: e0314044,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95fcb96b-7caa-4af7-b045-59031246cd5f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.431456363Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7530453-5377-4999-b577-fcf8e7df0714 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.431611978Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7530453-5377-4999-b577-fcf8e7df0714 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.432823306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a0b7681-0c06-4984-b433-37b73afbc216 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.434675493Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713284713434650949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573324,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a0b7681-0c06-4984-b433-37b73afbc216 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.435292465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d35da996-fb47-495b-a740-ecd48848e38b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.435377813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d35da996-fb47-495b-a740-ecd48848e38b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.435759314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9401ff45b5d11283c5c59ae7f167f1f594bf77879147bf8ee4321d8a275890e4,PodSandboxId:5bc38316d925ee330817fe127dfc5bf1479b1b3be27564b0cb8a9021c1787988,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713284705329659914,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2dnfr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ac1098f-abbb-402f-ae6e-dfe5334735ab,},Annotations:map[string]string{io.kubernetes.container.hash: 8d04598f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896f1ee148c27db5403d97a1be3fd2817d3837c2e008dc606cb27ecfc36355c,PodSandboxId:a1cc71ec3d2931964232fef58da3e9c28f77d624d0ef15dc285244a190b67351,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1713284563119900593,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddf62631-814a-41e0-96ed-ec74b1056618,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c3cccb4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fb530ad64c093b25199ad71960a382f17a6948ad1fdc280b922a5de98aa1a2,PodSandboxId:205342eeb83082fc24d1b2247b274faf11a6986b67f0be1ae1d26ed6d981f095,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713284561112499530,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4jpnf,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 1a3eee24-c904-4664-822f-114064b24f70,},Annotations:map[string]string{io.kubernetes.container.hash: 3770ef7b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc70e0c152569f61497ab49f7123a8e4edb0f4a2ed195d3b3adbe33a2e7f5d61,PodSandboxId:c53001014bdc9d0155e824cd255fdddc96c7ae4bc070c62b86ee4ac2bf465308,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713284523176094528,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-tg942,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c9e3b77c-17d5-4889-bc47-7cb21cbbdff4,},Annotations:map[string]string{io.kubernetes.container.hash: 71abd316,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd835adb6172b7d84080d851be26632a2c6096247c2edfbd75b6715065730b8,PodSandboxId:2b434e9457079983abc0b8c7d6fa01ad208544772440cf9cea2707beae7ae188,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713284500652701253,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xl29,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcb9cbf3-8cf2-4652-aceb-874934987dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 996fd009,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b151b45103436f0d6fa8d418a31137e2778913ef939c8b9e7b4b70e072d2c4b,PodSandboxId:9eb79a003bd7f16526ea867796b63b575238d923bf9e595945985746a71fee58,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1713284499944754720,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vdlvd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 89a78b50-db68-4b4d-a279-a651004f9b53,},Annotations:map[string]string{io.kubernetes.container.hash: cd9a09ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df4bf9a048912d7b7331a47d7f6ad64faf70474c6fb526c15a85750d828a6e5,PodSandboxId:93c359a1c11f0459a779c9b7ca14bd3bb9c8f5146ce8ecc8d7e66b762c530dd8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1713284490144383124,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-w94qg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 208a058d-7e12-4509-b543-5e14c69bcf86,},Annotations:map[string]string{io.kubernetes.container.hash: 7fc6d6dc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b6a9cb14c99ae2ee1cf599b601f1bd6cd9740f7a1db3283a5356141dae9d3,PodSandboxId:ab9aabf53cdf080c936c507966ec3eb048c62a036d77ac83a885b4f7ebc5f405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713284449388427282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6312df28-a808-4bc9-a458-fdeefa768264,},Annotations:map[string]string{io.kubernetes.container.hash: fdc9d727,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7eb647612d34be2541874b5f2af3d9972ee7701ba0f9c678e2e331039d932a4,PodSandboxId:04f694d2fd7944cadd997b2e5b62942042bc9a0bf08fae5774222a9321cb5398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713284444528890228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-69q4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3aa8aee-4f59-444f-998b-51d1069b4b2f,},Annotations:map[string]string{io.kubernetes.container.hash: a9fde3f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c5ebbb9b27ea7396c30d95e1227a3c09d62da4728426c9d864d4f2ed9975ef,PodSandboxId:4c6ef2e751fe080890d1f1af47511f1d53f075965d1f0eb715aec72524e120c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713284441563989092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkm8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbf2174-84c1-4d5e-92f6-fa177c06a454,},Annotations:map[string]string{io.kubernetes.container.hash: 5ae3447f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc05f9380451a2ffd3f1edb0fcc54e1ecdb3934158577e5cbba2fb83e37ee5fd,PodSandboxId:c85f0187a7817172e4c620b8badb99e09f2b84107269a83e20923518aa4cbe34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c39
0d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713284421946946579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9c888cb29cd726e8c3c1fcd3a30a108,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4230602147e4651f3a1525c87ee453684e63cd20d72e538f9fcec32778f4279a,PodSandboxId:66f1fb983c459f4ea7acefc00da6e7a92711b6df9f0bce549a98dbca64a9f9ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f9
7387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713284421957135124,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a324aab73278545cee719b1fcca93b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc32f58cf825ccdbbf259eff3bb3e6200518ff2c64f809d5c9a2e6f37eb403f6,PodSandboxId:05b1ba8dcbd88fcb3ba5bc206f2da14733df44a779d3b63a5fd52a4e12a94108,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713284421899287334,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f29d82586b2f96163d64927fe37a1db,},Annotations:map[string]string{io.kubernetes.container.hash: af1f417b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc018dd3206a6464badea06cd3e59181ccc6f140e466f7cfd874ddf78ffeac90,PodSandboxId:0c5955052bf291491ad5f9a2ddb5d1ddcc3cab193db3fa3645b2f4f2286cd224,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8ef
aab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713284421860962524,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbd326adaab28fe35f8c5d387a0cb69,},Annotations:map[string]string{io.kubernetes.container.hash: e0314044,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d35da996-fb47-495b-a740-ecd48848e38b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.477043873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83508104-5379-4d67-b659-d2c2347ffd6a name=/runtime.v1.RuntimeService/Version
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.477140105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83508104-5379-4d67-b659-d2c2347ffd6a name=/runtime.v1.RuntimeService/Version
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.478702779Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd375671-4b2b-49b4-b57e-b3dc5637d560 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.479954593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713284713479929650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573324,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd375671-4b2b-49b4-b57e-b3dc5637d560 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.481015095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f6a48e2-31a7-43ab-a6b8-512ce2992154 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.481103810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f6a48e2-31a7-43ab-a6b8-512ce2992154 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:25:13 addons-320546 crio[683]: time="2024-04-16 16:25:13.481395394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9401ff45b5d11283c5c59ae7f167f1f594bf77879147bf8ee4321d8a275890e4,PodSandboxId:5bc38316d925ee330817fe127dfc5bf1479b1b3be27564b0cb8a9021c1787988,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713284705329659914,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2dnfr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ac1098f-abbb-402f-ae6e-dfe5334735ab,},Annotations:map[string]string{io.kubernetes.container.hash: 8d04598f,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896f1ee148c27db5403d97a1be3fd2817d3837c2e008dc606cb27ecfc36355c,PodSandboxId:a1cc71ec3d2931964232fef58da3e9c28f77d624d0ef15dc285244a190b67351,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1713284563119900593,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddf62631-814a-41e0-96ed-ec74b1056618,},Annotations:map[string]string{io.kubern
etes.container.hash: 1c3cccb4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3fb530ad64c093b25199ad71960a382f17a6948ad1fdc280b922a5de98aa1a2,PodSandboxId:205342eeb83082fc24d1b2247b274faf11a6986b67f0be1ae1d26ed6d981f095,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713284561112499530,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4jpnf,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 1a3eee24-c904-4664-822f-114064b24f70,},Annotations:map[string]string{io.kubernetes.container.hash: 3770ef7b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc70e0c152569f61497ab49f7123a8e4edb0f4a2ed195d3b3adbe33a2e7f5d61,PodSandboxId:c53001014bdc9d0155e824cd255fdddc96c7ae4bc070c62b86ee4ac2bf465308,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713284523176094528,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-tg942,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c9e3b77c-17d5-4889-bc47-7cb21cbbdff4,},Annotations:map[string]string{io.kubernetes.container.hash: 71abd316,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd835adb6172b7d84080d851be26632a2c6096247c2edfbd75b6715065730b8,PodSandboxId:2b434e9457079983abc0b8c7d6fa01ad208544772440cf9cea2707beae7ae188,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713284500652701253,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xl29,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcb9cbf3-8cf2-4652-aceb-874934987dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 996fd009,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b151b45103436f0d6fa8d418a31137e2778913ef939c8b9e7b4b70e072d2c4b,PodSandboxId:9eb79a003bd7f16526ea867796b63b575238d923bf9e595945985746a71fee58,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1713284499944754720,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vdlvd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 89a78b50-db68-4b4d-a279-a651004f9b53,},Annotations:map[string]string{io.kubernetes.container.hash: cd9a09ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4df4bf9a048912d7b7331a47d7f6ad64faf70474c6fb526c15a85750d828a6e5,PodSandboxId:93c359a1c11f0459a779c9b7ca14bd3bb9c8f5146ce8ecc8d7e66b762c530dd8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1713284490144383124,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-w94qg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 208a058d-7e12-4509-b543-5e14c69bcf86,},Annotations:map[string]string{io.kubernetes.container.hash: 7fc6d6dc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34b6a9cb14c99ae2ee1cf599b601f1bd6cd9740f7a1db3283a5356141dae9d3,PodSandboxId:ab9aabf53cdf080c936c507966ec3eb048c62a036d77ac83a885b4f7ebc5f405,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f
40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713284449388427282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6312df28-a808-4bc9-a458-fdeefa768264,},Annotations:map[string]string{io.kubernetes.container.hash: fdc9d727,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7eb647612d34be2541874b5f2af3d9972ee7701ba0f9c678e2e331039d932a4,PodSandboxId:04f694d2fd7944cadd997b2e5b62942042bc9a0bf08fae5774222a9321cb5398,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713284444528890228,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-69q4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3aa8aee-4f59-444f-998b-51d1069b4b2f,},Annotations:map[string]string{io.kubernetes.container.hash: a9fde3f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28c5ebbb9b27ea7396c30d95e1227a3c09d62da4728426c9d864d4f2ed9975ef,PodSandboxId:4c6ef2e751fe080890d1f1af47511f1d53f075965d1f0eb715aec72524e120c8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,
},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713284441563989092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkm8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adbf2174-84c1-4d5e-92f6-fa177c06a454,},Annotations:map[string]string{io.kubernetes.container.hash: 5ae3447f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc05f9380451a2ffd3f1edb0fcc54e1ecdb3934158577e5cbba2fb83e37ee5fd,PodSandboxId:c85f0187a7817172e4c620b8badb99e09f2b84107269a83e20923518aa4cbe34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c39
0d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713284421946946579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9c888cb29cd726e8c3c1fcd3a30a108,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4230602147e4651f3a1525c87ee453684e63cd20d72e538f9fcec32778f4279a,PodSandboxId:66f1fb983c459f4ea7acefc00da6e7a92711b6df9f0bce549a98dbca64a9f9ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f9
7387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713284421957135124,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a324aab73278545cee719b1fcca93b3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc32f58cf825ccdbbf259eff3bb3e6200518ff2c64f809d5c9a2e6f37eb403f6,PodSandboxId:05b1ba8dcbd88fcb3ba5bc206f2da14733df44a779d3b63a5fd52a4e12a94108,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713284421899287334,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f29d82586b2f96163d64927fe37a1db,},Annotations:map[string]string{io.kubernetes.container.hash: af1f417b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc018dd3206a6464badea06cd3e59181ccc6f140e466f7cfd874ddf78ffeac90,PodSandboxId:0c5955052bf291491ad5f9a2ddb5d1ddcc3cab193db3fa3645b2f4f2286cd224,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8ef
aab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713284421860962524,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-320546,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fbd326adaab28fe35f8c5d387a0cb69,},Annotations:map[string]string{io.kubernetes.container.hash: e0314044,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f6a48e2-31a7-43ab-a6b8-512ce2992154 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9401ff45b5d11       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   5bc38316d925e       hello-world-app-5d77478584-2dnfr
	4896f1ee148c2       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   a1cc71ec3d293       nginx
	a3fb530ad64c0       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        2 minutes ago       Running             headlamp                  0                   205342eeb8308       headlamp-5b77dbd7c4-4jpnf
	bc70e0c152569       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   c53001014bdc9       gcp-auth-7d69788767-tg942
	bbd835adb6172       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                             3 minutes ago       Exited              patch                     1                   2b434e9457079       ingress-nginx-admission-patch-7xl29
	1b151b4510343       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   9eb79a003bd7f       ingress-nginx-admission-create-vdlvd
	4df4bf9a04891       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   93c359a1c11f0       yakd-dashboard-9947fc6bf-w94qg
	b34b6a9cb14c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   ab9aabf53cdf0       storage-provisioner
	a7eb647612d34       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   04f694d2fd794       coredns-76f75df574-69q4z
	28c5ebbb9b27e       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                             4 minutes ago       Running             kube-proxy                0                   4c6ef2e751fe0       kube-proxy-vkm8w
	4230602147e46       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                             4 minutes ago       Running             kube-controller-manager   0                   66f1fb983c459       kube-controller-manager-addons-320546
	fc05f9380451a       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                             4 minutes ago       Running             kube-scheduler            0                   c85f0187a7817       kube-scheduler-addons-320546
	fc32f58cf825c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   05b1ba8dcbd88       etcd-addons-320546
	dc018dd3206a6       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                             4 minutes ago       Running             kube-apiserver            0                   0c5955052bf29       kube-apiserver-addons-320546
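	The table above is the post-mortem rendering of the same container list that CRI-O returns to the repeated ListContainers calls in the debug log further up. As a hedged aside (none of these commands appear in the recorded test run), the equivalent data could be pulled manually over the crio socket shown in the node's cri-socket annotation below:
	
	    # Assumption: run inside the minikube VM (e.g. via `minikube ssh`); crictl
	    # talks to the same socket the kubelet uses (unix:///var/run/crio/crio.sock).
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    # Narrow to one pod's containers by name, e.g. the hello-world-app deployment:
	    sudo crictl ps -a --name hello-world-app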
	
	
	==> coredns [a7eb647612d34be2541874b5f2af3d9972ee7701ba0f9c678e2e331039d932a4] <==
	[INFO] 10.244.0.9:53292 - 22179 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000832689s
	[INFO] 10.244.0.9:40756 - 8897 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013545s
	[INFO] 10.244.0.9:40756 - 7876 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091688s
	[INFO] 10.244.0.9:60909 - 21915 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109621s
	[INFO] 10.244.0.9:60909 - 17561 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092028s
	[INFO] 10.244.0.9:46536 - 1371 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000123395s
	[INFO] 10.244.0.9:46536 - 33625 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087468s
	[INFO] 10.244.0.9:39231 - 18062 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000086361s
	[INFO] 10.244.0.9:39231 - 57475 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105825s
	[INFO] 10.244.0.9:34700 - 2959 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000198319s
	[INFO] 10.244.0.9:34700 - 141 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113525s
	[INFO] 10.244.0.9:39449 - 48126 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073223s
	[INFO] 10.244.0.9:39449 - 29951 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000181995s
	[INFO] 10.244.0.9:46665 - 34355 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081073s
	[INFO] 10.244.0.9:46665 - 3890 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000027379s
	[INFO] 10.244.0.22:34296 - 48908 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000221362s
	[INFO] 10.244.0.22:43447 - 32005 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00024761s
	[INFO] 10.244.0.22:46722 - 54237 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094829s
	[INFO] 10.244.0.22:38356 - 63144 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000055453s
	[INFO] 10.244.0.22:58221 - 38778 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113162s
	[INFO] 10.244.0.22:51782 - 53433 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000069042s
	[INFO] 10.244.0.22:50942 - 4919 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000885268s
	[INFO] 10.244.0.22:55649 - 14297 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.000701662s
	[INFO] 10.244.0.26:41627 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000454489s
	[INFO] 10.244.0.26:50391 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000182919s
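	The NXDOMAIN/NOERROR pairs above are ordinary search-domain expansion: each client tries registry.kube-system.svc.cluster.local with its pod's search suffixes appended before the bare Service name finally resolves. A minimal sketch of reproducing one such lookup from inside the cluster (assuming the kubectl context is named after the addons-320546 profile; this step is not part of the recorded test):
	
	    kubectl --context addons-320546 run dns-probe --rm -it --restart=Never \
	      --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local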
	
	
	==> describe nodes <==
	Name:               addons-320546
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-320546
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=addons-320546
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_20_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-320546
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:20:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-320546
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:25:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:23:02 +0000   Tue, 16 Apr 2024 16:20:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:23:02 +0000   Tue, 16 Apr 2024 16:20:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:23:02 +0000   Tue, 16 Apr 2024 16:20:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:23:02 +0000   Tue, 16 Apr 2024 16:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    addons-320546
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 adab0f2e07a046a29471a34800dc0be8
	  System UUID:                adab0f2e-07a0-46a2-9471-a34800dc0be8
	  Boot ID:                    7283ccb1-1385-4c72-a6f8-6ae947fc2933
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-2dnfr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  gcp-auth                    gcp-auth-7d69788767-tg942                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  headlamp                    headlamp-5b77dbd7c4-4jpnf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 coredns-76f75df574-69q4z                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-320546                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-addons-320546             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-320546    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-vkm8w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-320546             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-w94qg           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m31s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m45s  kubelet          Node addons-320546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s  kubelet          Node addons-320546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s  kubelet          Node addons-320546 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m44s  kubelet          Node addons-320546 status is now: NodeReady
	  Normal  RegisteredNode           4m34s  node-controller  Node addons-320546 event: Registered Node addons-320546 in Controller
	
	
	==> dmesg <==
	[  +0.084055] kauditd_printk_skb: 30 callbacks suppressed
	[ +11.857094] systemd-fstab-generator[1480]: Ignoring "noauto" option for root device
	[  +0.145674] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.002045] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.071340] kauditd_printk_skb: 117 callbacks suppressed
	[  +5.855364] kauditd_printk_skb: 74 callbacks suppressed
	[Apr16 16:21] kauditd_printk_skb: 34 callbacks suppressed
	[ +11.812931] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.025359] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.695472] kauditd_printk_skb: 10 callbacks suppressed
	[ +14.137505] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.499990] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.170476] kauditd_printk_skb: 31 callbacks suppressed
	[  +7.120253] kauditd_printk_skb: 13 callbacks suppressed
	[Apr16 16:22] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.035540] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.214208] kauditd_printk_skb: 43 callbacks suppressed
	[  +8.997303] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.769119] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.036692] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.388840] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.339289] kauditd_printk_skb: 31 callbacks suppressed
	[  +8.393704] kauditd_printk_skb: 33 callbacks suppressed
	[Apr16 16:25] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.929352] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [fc32f58cf825ccdbbf259eff3bb3e6200518ff2c64f809d5c9a2e6f37eb403f6] <==
	{"level":"warn","ts":"2024-04-16T16:22:08.824864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.18013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-16T16:22:08.8249Z","caller":"traceutil/trace.go:171","msg":"trace[1911186728] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:0; response_revision:1208; }","duration":"178.291479ms","start":"2024-04-16T16:22:08.646601Z","end":"2024-04-16T16:22:08.824892Z","steps":["trace[1911186728] 'agreement among raft nodes before linearized reading'  (duration: 178.0519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:08.824907Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.891818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-320546\" ","response":"range_response_count:1 size:10221"}
	{"level":"info","ts":"2024-04-16T16:22:08.824933Z","caller":"traceutil/trace.go:171","msg":"trace[1001519268] range","detail":"{range_begin:/registry/minions/addons-320546; range_end:; response_count:1; response_revision:1208; }","duration":"152.94668ms","start":"2024-04-16T16:22:08.67198Z","end":"2024-04-16T16:22:08.824926Z","steps":["trace[1001519268] 'agreement among raft nodes before linearized reading'  (duration: 152.850181ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:08.82505Z","caller":"traceutil/trace.go:171","msg":"trace[1276557084] transaction","detail":"{read_only:false; response_revision:1208; number_of_response:1; }","duration":"340.745338ms","start":"2024-04-16T16:22:08.484296Z","end":"2024-04-16T16:22:08.825041Z","steps":["trace[1276557084] 'process raft request'  (duration: 340.061982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:08.825108Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T16:22:08.484284Z","time spent":"340.779656ms","remote":"127.0.0.1:37632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1190 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-16T16:22:23.144783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.885881ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T16:22:23.1449Z","caller":"traceutil/trace.go:171","msg":"trace[500868337] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1373; }","duration":"151.028036ms","start":"2024-04-16T16:22:22.99385Z","end":"2024-04-16T16:22:23.144878Z","steps":["trace[500868337] 'range keys from in-memory index tree'  (duration: 150.837395ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:23.14513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.666119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-04-16T16:22:23.145152Z","caller":"traceutil/trace.go:171","msg":"trace[2121023343] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1373; }","duration":"220.724516ms","start":"2024-04-16T16:22:22.924421Z","end":"2024-04-16T16:22:23.145145Z","steps":["trace[2121023343] 'range keys from in-memory index tree'  (duration: 220.548002ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:23.145264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.470615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:1 size:2082"}
	{"level":"info","ts":"2024-04-16T16:22:23.14528Z","caller":"traceutil/trace.go:171","msg":"trace[1611705289] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1373; }","duration":"211.515011ms","start":"2024-04-16T16:22:22.933758Z","end":"2024-04-16T16:22:23.145273Z","steps":["trace[1611705289] 'range keys from in-memory index tree'  (duration: 211.359494ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:28.451197Z","caller":"traceutil/trace.go:171","msg":"trace[337874363] linearizableReadLoop","detail":"{readStateIndex:1456; appliedIndex:1455; }","duration":"198.874457ms","start":"2024-04-16T16:22:28.252308Z","end":"2024-04-16T16:22:28.451182Z","steps":["trace[337874363] 'read index received'  (duration: 198.691425ms)","trace[337874363] 'applied index is now lower than readState.Index'  (duration: 180.726µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T16:22:28.451377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.051429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6511"}
	{"level":"info","ts":"2024-04-16T16:22:28.451401Z","caller":"traceutil/trace.go:171","msg":"trace[1456535063] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1411; }","duration":"199.109626ms","start":"2024-04-16T16:22:28.252284Z","end":"2024-04-16T16:22:28.451393Z","steps":["trace[1456535063] 'agreement among raft nodes before linearized reading'  (duration: 198.982354ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:28.451438Z","caller":"traceutil/trace.go:171","msg":"trace[1875090143] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1411; }","duration":"213.994721ms","start":"2024-04-16T16:22:28.237429Z","end":"2024-04-16T16:22:28.451424Z","steps":["trace[1875090143] 'process raft request'  (duration: 213.615845ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:39.442255Z","caller":"traceutil/trace.go:171","msg":"trace[1984355651] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"193.818738ms","start":"2024-04-16T16:22:39.248421Z","end":"2024-04-16T16:22:39.44224Z","steps":["trace[1984355651] 'process raft request'  (duration: 193.574215ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T16:22:40.949356Z","caller":"traceutil/trace.go:171","msg":"trace[51831000] linearizableReadLoop","detail":"{readStateIndex:1609; appliedIndex:1608; }","duration":"299.148073ms","start":"2024-04-16T16:22:40.650195Z","end":"2024-04-16T16:22:40.949343Z","steps":["trace[51831000] 'read index received'  (duration: 298.915922ms)","trace[51831000] 'applied index is now lower than readState.Index'  (duration: 231.359µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T16:22:40.94951Z","caller":"traceutil/trace.go:171","msg":"trace[601050748] transaction","detail":"{read_only:false; response_revision:1561; number_of_response:1; }","duration":"411.042039ms","start":"2024-04-16T16:22:40.53846Z","end":"2024-04-16T16:22:40.949502Z","steps":["trace[601050748] 'process raft request'  (duration: 410.759572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:40.950197Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T16:22:40.538368Z","time spent":"411.733652ms","remote":"127.0.0.1:37722","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1526 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-04-16T16:22:40.949713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.741803ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5985"}
	{"level":"info","ts":"2024-04-16T16:22:40.950494Z","caller":"traceutil/trace.go:171","msg":"trace[2006550871] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1561; }","duration":"170.554927ms","start":"2024-04-16T16:22:40.779924Z","end":"2024-04-16T16:22:40.950479Z","steps":["trace[2006550871] 'agreement among raft nodes before linearized reading'  (duration: 169.689089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:40.949799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.603039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-04-16T16:22:40.95106Z","caller":"traceutil/trace.go:171","msg":"trace[463668283] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:1; response_revision:1561; }","duration":"300.860044ms","start":"2024-04-16T16:22:40.65019Z","end":"2024-04-16T16:22:40.95105Z","steps":["trace[463668283] 'agreement among raft nodes before linearized reading'  (duration: 299.542954ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:22:40.951118Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T16:22:40.650158Z","time spent":"300.949319ms","remote":"127.0.0.1:37722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":521,"request content":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" "}
	
	
	==> gcp-auth [bc70e0c152569f61497ab49f7123a8e4edb0f4a2ed195d3b3adbe33a2e7f5d61] <==
	2024/04/16 16:22:03 GCP Auth Webhook started!
	2024/04/16 16:22:04 Ready to marshal response ...
	2024/04/16 16:22:04 Ready to write response ...
	2024/04/16 16:22:04 Ready to marshal response ...
	2024/04/16 16:22:04 Ready to write response ...
	2024/04/16 16:22:11 Ready to marshal response ...
	2024/04/16 16:22:11 Ready to write response ...
	2024/04/16 16:22:14 Ready to marshal response ...
	2024/04/16 16:22:14 Ready to write response ...
	2024/04/16 16:22:17 Ready to marshal response ...
	2024/04/16 16:22:17 Ready to write response ...
	2024/04/16 16:22:21 Ready to marshal response ...
	2024/04/16 16:22:21 Ready to write response ...
	2024/04/16 16:22:35 Ready to marshal response ...
	2024/04/16 16:22:35 Ready to write response ...
	2024/04/16 16:22:35 Ready to marshal response ...
	2024/04/16 16:22:35 Ready to write response ...
	2024/04/16 16:22:35 Ready to marshal response ...
	2024/04/16 16:22:35 Ready to write response ...
	2024/04/16 16:22:35 Ready to marshal response ...
	2024/04/16 16:22:35 Ready to write response ...
	2024/04/16 16:22:38 Ready to marshal response ...
	2024/04/16 16:22:38 Ready to write response ...
	2024/04/16 16:25:02 Ready to marshal response ...
	2024/04/16 16:25:02 Ready to write response ...
	
	
	==> kernel <==
	 16:25:13 up 5 min,  0 users,  load average: 0.73, 1.05, 0.54
	Linux addons-320546 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dc018dd3206a6464badea06cd3e59181ccc6f140e466f7cfd874ddf78ffeac90] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0416 16:21:32.517182       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.1.52:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.1.52:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
	I0416 16:21:32.541453       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0416 16:22:24.277348       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0416 16:22:25.308121       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0416 16:22:27.701872       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0416 16:22:31.321117       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0416 16:22:33.526240       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0416 16:22:35.307085       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.24.73"}
	I0416 16:22:35.546282       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0416 16:22:35.774817       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.228.179"}
	I0416 16:22:57.403268       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0416 16:22:57.403344       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0416 16:22:57.423670       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0416 16:22:57.423731       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0416 16:22:57.436699       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0416 16:22:57.442368       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0416 16:22:57.454327       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0416 16:22:57.455462       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0416 16:22:57.482663       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0416 16:22:57.482754       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0416 16:22:58.456268       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0416 16:22:58.483036       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0416 16:22:58.494453       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0416 16:25:02.563009       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.215.171"}
	
	
	==> kube-controller-manager [4230602147e4651f3a1525c87ee453684e63cd20d72e538f9fcec32778f4279a] <==
	W0416 16:23:58.225503       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:23:58.225717       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0416 16:24:22.626863       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:24:22.627088       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0416 16:24:24.257052       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:24:24.257156       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0416 16:24:27.206468       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:24:27.206633       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0416 16:24:40.180172       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:24:40.180242       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0416 16:24:56.536702       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:24:56.536769       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0416 16:25:02.359522       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0416 16:25:02.406709       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-2dnfr"
	I0416 16:25:02.409697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.671895ms"
	I0416 16:25:02.440893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.135459ms"
	I0416 16:25:02.441132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="93.471µs"
	I0416 16:25:02.453496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="62.761µs"
	I0416 16:25:05.487066       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0416 16:25:05.492204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="3.901µs"
	I0416 16:25:05.500706       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0416 16:25:05.587000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.050736ms"
	I0416 16:25:05.587706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="171.249µs"
	W0416 16:25:09.534357       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0416 16:25:09.534395       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [28c5ebbb9b27ea7396c30d95e1227a3c09d62da4728426c9d864d4f2ed9975ef] <==
	I0416 16:20:42.600714       1 server_others.go:72] "Using iptables proxy"
	I0416 16:20:42.634045       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.101"]
	I0416 16:20:42.753599       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:20:42.753650       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:20:42.753663       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:20:42.758905       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:20:42.759118       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:20:42.759157       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:20:42.760319       1 config.go:188] "Starting service config controller"
	I0416 16:20:42.760364       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:20:42.760386       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:20:42.760389       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:20:42.760899       1 config.go:315] "Starting node config controller"
	I0416 16:20:42.760905       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:20:42.860452       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:20:42.860496       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:20:42.861311       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fc05f9380451a2ffd3f1edb0fcc54e1ecdb3934158577e5cbba2fb83e37ee5fd] <==
	W0416 16:20:24.727515       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:24.727658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:24.727733       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 16:20:24.727762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 16:20:24.727865       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:24.727945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:25.611121       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:20:25.611227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:20:25.661882       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:25.662172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:25.675162       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:20:25.675277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:20:25.752398       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:20:25.752718       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:20:25.787198       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:20:25.787296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:20:25.841814       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 16:20:25.841935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 16:20:25.910183       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 16:20:25.910340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:20:25.925309       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 16:20:25.926930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 16:20:25.927764       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:20:25.928098       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 16:20:28.308179       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 16:25:02 addons-320546 kubelet[1282]: I0416 16:25:02.420724    1282 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6aa7ab0-a826-4e5b-b825-ddaaa156b7f6" containerName="csi-attacher"
	Apr 16 16:25:02 addons-320546 kubelet[1282]: I0416 16:25:02.420755    1282 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cda063-1213-4803-9cc2-e992215d9225" containerName="csi-provisioner"
	Apr 16 16:25:02 addons-320546 kubelet[1282]: I0416 16:25:02.493041    1282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7ffm\" (UniqueName: \"kubernetes.io/projected/0ac1098f-abbb-402f-ae6e-dfe5334735ab-kube-api-access-c7ffm\") pod \"hello-world-app-5d77478584-2dnfr\" (UID: \"0ac1098f-abbb-402f-ae6e-dfe5334735ab\") " pod="default/hello-world-app-5d77478584-2dnfr"
	Apr 16 16:25:02 addons-320546 kubelet[1282]: I0416 16:25:02.493109    1282 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0ac1098f-abbb-402f-ae6e-dfe5334735ab-gcp-creds\") pod \"hello-world-app-5d77478584-2dnfr\" (UID: \"0ac1098f-abbb-402f-ae6e-dfe5334735ab\") " pod="default/hello-world-app-5d77478584-2dnfr"
	Apr 16 16:25:03 addons-320546 kubelet[1282]: I0416 16:25:03.704349    1282 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxwcw\" (UniqueName: \"kubernetes.io/projected/9581dcbf-6a10-463a-bfda-8e35065cd1df-kube-api-access-sxwcw\") pod \"9581dcbf-6a10-463a-bfda-8e35065cd1df\" (UID: \"9581dcbf-6a10-463a-bfda-8e35065cd1df\") "
	Apr 16 16:25:03 addons-320546 kubelet[1282]: I0416 16:25:03.724226    1282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9581dcbf-6a10-463a-bfda-8e35065cd1df-kube-api-access-sxwcw" (OuterVolumeSpecName: "kube-api-access-sxwcw") pod "9581dcbf-6a10-463a-bfda-8e35065cd1df" (UID: "9581dcbf-6a10-463a-bfda-8e35065cd1df"). InnerVolumeSpecName "kube-api-access-sxwcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 16 16:25:03 addons-320546 kubelet[1282]: I0416 16:25:03.805009    1282 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sxwcw\" (UniqueName: \"kubernetes.io/projected/9581dcbf-6a10-463a-bfda-8e35065cd1df-kube-api-access-sxwcw\") on node \"addons-320546\" DevicePath \"\""
	Apr 16 16:25:04 addons-320546 kubelet[1282]: I0416 16:25:04.516436    1282 scope.go:117] "RemoveContainer" containerID="06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400"
	Apr 16 16:25:04 addons-320546 kubelet[1282]: I0416 16:25:04.605441    1282 scope.go:117] "RemoveContainer" containerID="06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400"
	Apr 16 16:25:04 addons-320546 kubelet[1282]: E0416 16:25:04.606496    1282 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400\": container with ID starting with 06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400 not found: ID does not exist" containerID="06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400"
	Apr 16 16:25:04 addons-320546 kubelet[1282]: I0416 16:25:04.606619    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400"} err="failed to get container status \"06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400\": rpc error: code = NotFound desc = could not find container \"06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400\": container with ID starting with 06af7ead5167c15d4fd3d73a8c8b0b9e33b0e5c91773ab63655a4450b03e9400 not found: ID does not exist"
	Apr 16 16:25:04 addons-320546 kubelet[1282]: I0416 16:25:04.810638    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9581dcbf-6a10-463a-bfda-8e35065cd1df" path="/var/lib/kubelet/pods/9581dcbf-6a10-463a-bfda-8e35065cd1df/volumes"
	Apr 16 16:25:06 addons-320546 kubelet[1282]: I0416 16:25:06.810170    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a78b50-db68-4b4d-a279-a651004f9b53" path="/var/lib/kubelet/pods/89a78b50-db68-4b4d-a279-a651004f9b53/volumes"
	Apr 16 16:25:06 addons-320546 kubelet[1282]: I0416 16:25:06.810727    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcb9cbf3-8cf2-4652-aceb-874934987dd2" path="/var/lib/kubelet/pods/fcb9cbf3-8cf2-4652-aceb-874934987dd2/volumes"
	Apr 16 16:25:08 addons-320546 kubelet[1282]: I0416 16:25:08.747307    1282 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98de88e5-c0da-4f7a-9731-95da5de4d95a-webhook-cert\") pod \"98de88e5-c0da-4f7a-9731-95da5de4d95a\" (UID: \"98de88e5-c0da-4f7a-9731-95da5de4d95a\") "
	Apr 16 16:25:08 addons-320546 kubelet[1282]: I0416 16:25:08.747767    1282 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbzmx\" (UniqueName: \"kubernetes.io/projected/98de88e5-c0da-4f7a-9731-95da5de4d95a-kube-api-access-wbzmx\") pod \"98de88e5-c0da-4f7a-9731-95da5de4d95a\" (UID: \"98de88e5-c0da-4f7a-9731-95da5de4d95a\") "
	Apr 16 16:25:08 addons-320546 kubelet[1282]: I0416 16:25:08.749958    1282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98de88e5-c0da-4f7a-9731-95da5de4d95a-kube-api-access-wbzmx" (OuterVolumeSpecName: "kube-api-access-wbzmx") pod "98de88e5-c0da-4f7a-9731-95da5de4d95a" (UID: "98de88e5-c0da-4f7a-9731-95da5de4d95a"). InnerVolumeSpecName "kube-api-access-wbzmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 16 16:25:08 addons-320546 kubelet[1282]: I0416 16:25:08.750841    1282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98de88e5-c0da-4f7a-9731-95da5de4d95a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "98de88e5-c0da-4f7a-9731-95da5de4d95a" (UID: "98de88e5-c0da-4f7a-9731-95da5de4d95a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 16 16:25:08 addons-320546 kubelet[1282]: I0416 16:25:08.809262    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98de88e5-c0da-4f7a-9731-95da5de4d95a" path="/var/lib/kubelet/pods/98de88e5-c0da-4f7a-9731-95da5de4d95a/volumes"
	Apr 16 16:25:08 addons-320546 kubelet[1282]: I0416 16:25:08.849020    1282 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98de88e5-c0da-4f7a-9731-95da5de4d95a-webhook-cert\") on node \"addons-320546\" DevicePath \"\""
	Apr 16 16:25:08 addons-320546 kubelet[1282]: I0416 16:25:08.849076    1282 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wbzmx\" (UniqueName: \"kubernetes.io/projected/98de88e5-c0da-4f7a-9731-95da5de4d95a-kube-api-access-wbzmx\") on node \"addons-320546\" DevicePath \"\""
	Apr 16 16:25:09 addons-320546 kubelet[1282]: I0416 16:25:09.556728    1282 scope.go:117] "RemoveContainer" containerID="58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1"
	Apr 16 16:25:09 addons-320546 kubelet[1282]: I0416 16:25:09.570695    1282 scope.go:117] "RemoveContainer" containerID="58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1"
	Apr 16 16:25:09 addons-320546 kubelet[1282]: E0416 16:25:09.571230    1282 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1\": container with ID starting with 58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1 not found: ID does not exist" containerID="58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1"
	Apr 16 16:25:09 addons-320546 kubelet[1282]: I0416 16:25:09.571273    1282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1"} err="failed to get container status \"58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1\": rpc error: code = NotFound desc = could not find container \"58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1\": container with ID starting with 58070cb0975223d9d5eb1dba67a67a59fa8cbe29ddb3f69919b1c90a3d5f9ba1 not found: ID does not exist"
	
	
	==> storage-provisioner [b34b6a9cb14c99ae2ee1cf599b601f1bd6cd9740f7a1db3283a5356141dae9d3] <==
	I0416 16:20:50.332642       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 16:20:50.426112       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 16:20:50.426175       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 16:20:50.445249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 16:20:50.445459       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-320546_a0bf9631-72c2-4d4e-a7ae-1c0363785a4f!
	I0416 16:20:50.453975       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f587e86-13ca-492c-9a00-791bc6ee15f6", APIVersion:"v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-320546_a0bf9631-72c2-4d4e-a7ae-1c0363785a4f became leader
	I0416 16:20:50.547602       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-320546_a0bf9631-72c2-4d4e-a7ae-1c0363785a4f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-320546 -n addons-320546
helpers_test.go:261: (dbg) Run:  kubectl --context addons-320546 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (159.46s)
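A minimal sketch of the post-mortem query the helper runs above (list every pod, in any namespace, whose phase is not Running). It only mirrors the kubectl invocation logged by helpers_test.go; the context name comes from that log line and the program itself is illustrative, not part of the test suite.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same field selector the post-mortem helper uses: anything not in phase Running.
	out, err := exec.Command(
		"kubectl", "--context", "addons-320546",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	fmt.Printf("non-running pods: %q\n", out)
}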

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.29s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-320546
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-320546: exit status 82 (2m0.484817413s)

                                                
                                                
-- stdout --
	* Stopping node "addons-320546"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-320546" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-320546
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-320546: exit status 11 (21.515833082s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-320546" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-320546
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-320546: exit status 11 (6.14406707s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-320546" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-320546
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-320546: exit status 11 (6.143452057s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.101:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-320546" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.29s)
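For reference, a rough Go sketch (not the test's actual code) of the command sequence that failed above: stop the profile, then enable and disable the dashboard addon, treating any non-zero exit status as a failure. The binary path and profile name are copied from the log lines; everything else is an assumption for illustration.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the locally built minikube binary referenced in the log and
// surfaces its combined output when the exit status is non-zero.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		// Exit status 82 (GUEST_STOP_TIMEOUT) and 11 (MK_ADDON_*_PAUSED with
		// "no route to host") from the log would surface here as *exec.ExitError.
		return fmt.Errorf("minikube %v: %w\n%s", args, err, out)
	}
	return nil
}

func main() {
	steps := [][]string{
		{"stop", "-p", "addons-320546"},
		{"addons", "enable", "dashboard", "-p", "addons-320546"},
		{"addons", "disable", "dashboard", "-p", "addons-320546"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			log.Fatal(err)
		}
	}
}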

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (14.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-711095
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image load --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr
E0416 16:32:24.370873   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 image load --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr: (11.446694913s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 image ls: (2.343894063s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-711095" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (14.81s)
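A hedged reproduction of the steps this test logs above: pull and re-tag the addon-resizer image, load it into the profile with `image load --daemon`, then check that it appears in `image ls`. The image names, binary path, and profile name are taken from the log; the exact `image ls` output format is assumed, so the final check is a sketch rather than the test's real assertion.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// mustRun executes a command and aborts with its combined output on failure.
func mustRun(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	const src = "gcr.io/google-containers/addon-resizer:1.8.9"
	const dst = "gcr.io/google-containers/addon-resizer:functional-711095"

	mustRun("docker", "pull", src)
	mustRun("docker", "tag", src, dst)
	mustRun("out/minikube-linux-amd64", "-p", "functional-711095",
		"image", "load", "--daemon", dst)

	// The failing assertion: the re-tagged image should be listed after the load.
	if ls := mustRun("out/minikube-linux-amd64", "-p", "functional-711095", "image", "ls"); !strings.Contains(ls, dst) {
		log.Fatalf("expected %s in image ls output:\n%s", dst, ls)
	}
}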

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 node stop m02 -v=7 --alsologtostderr
E0416 16:37:30.512559   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:31.574281   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:37:50.992750   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:38:31.953847   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.481691756s)

                                                
                                                
-- stdout --
	* Stopping node "ha-543552-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:37:29.352720   24939 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:37:29.352880   24939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:37:29.352891   24939 out.go:304] Setting ErrFile to fd 2...
	I0416 16:37:29.352895   24939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:37:29.353054   24939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:37:29.353359   24939 mustload.go:65] Loading cluster: ha-543552
	I0416 16:37:29.353799   24939 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:37:29.353818   24939 stop.go:39] StopHost: ha-543552-m02
	I0416 16:37:29.354233   24939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:37:29.354280   24939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:37:29.369250   24939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0416 16:37:29.369723   24939 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:37:29.370244   24939 main.go:141] libmachine: Using API Version  1
	I0416 16:37:29.370264   24939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:37:29.370666   24939 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:37:29.373081   24939 out.go:177] * Stopping node "ha-543552-m02"  ...
	I0416 16:37:29.374307   24939 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 16:37:29.374335   24939 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:37:29.374575   24939 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 16:37:29.374600   24939 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:37:29.377699   24939 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:37:29.378173   24939 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:37:29.378212   24939 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:37:29.378317   24939 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:37:29.378515   24939 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:37:29.378661   24939 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:37:29.378786   24939 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:37:29.466123   24939 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 16:37:29.521302   24939 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 16:37:29.576828   24939 main.go:141] libmachine: Stopping "ha-543552-m02"...
	I0416 16:37:29.576869   24939 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:37:29.578453   24939 main.go:141] libmachine: (ha-543552-m02) Calling .Stop
	I0416 16:37:29.582291   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 0/120
	I0416 16:37:30.583971   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 1/120
	I0416 16:37:31.585257   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 2/120
	I0416 16:37:32.587317   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 3/120
	I0416 16:37:33.588484   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 4/120
	I0416 16:37:34.590608   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 5/120
	I0416 16:37:35.591944   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 6/120
	I0416 16:37:36.593254   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 7/120
	I0416 16:37:37.595227   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 8/120
	I0416 16:37:38.597242   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 9/120
	I0416 16:37:39.599509   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 10/120
	I0416 16:37:40.600681   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 11/120
	I0416 16:37:41.602301   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 12/120
	I0416 16:37:42.604434   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 13/120
	I0416 16:37:43.605960   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 14/120
	I0416 16:37:44.607938   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 15/120
	I0416 16:37:45.609458   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 16/120
	I0416 16:37:46.611370   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 17/120
	I0416 16:37:47.612729   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 18/120
	I0416 16:37:48.614371   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 19/120
	I0416 16:37:49.616604   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 20/120
	I0416 16:37:50.618171   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 21/120
	I0416 16:37:51.620104   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 22/120
	I0416 16:37:52.621551   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 23/120
	I0416 16:37:53.622948   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 24/120
	I0416 16:37:54.625163   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 25/120
	I0416 16:37:55.627269   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 26/120
	I0416 16:37:56.628636   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 27/120
	I0416 16:37:57.630081   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 28/120
	I0416 16:37:58.631587   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 29/120
	I0416 16:37:59.633806   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 30/120
	I0416 16:38:00.635136   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 31/120
	I0416 16:38:01.636429   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 32/120
	I0416 16:38:02.638303   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 33/120
	I0416 16:38:03.640200   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 34/120
	I0416 16:38:04.642435   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 35/120
	I0416 16:38:05.643787   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 36/120
	I0416 16:38:06.645113   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 37/120
	I0416 16:38:07.647470   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 38/120
	I0416 16:38:08.648772   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 39/120
	I0416 16:38:09.651136   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 40/120
	I0416 16:38:10.652538   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 41/120
	I0416 16:38:11.653851   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 42/120
	I0416 16:38:12.655285   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 43/120
	I0416 16:38:13.656683   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 44/120
	I0416 16:38:14.658964   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 45/120
	I0416 16:38:15.660343   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 46/120
	I0416 16:38:16.661832   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 47/120
	I0416 16:38:17.663709   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 48/120
	I0416 16:38:18.665269   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 49/120
	I0416 16:38:19.667467   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 50/120
	I0416 16:38:20.668809   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 51/120
	I0416 16:38:21.670059   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 52/120
	I0416 16:38:22.671527   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 53/120
	I0416 16:38:23.672728   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 54/120
	I0416 16:38:24.673997   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 55/120
	I0416 16:38:25.675447   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 56/120
	I0416 16:38:26.677056   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 57/120
	I0416 16:38:27.679344   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 58/120
	I0416 16:38:28.680862   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 59/120
	I0416 16:38:29.683071   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 60/120
	I0416 16:38:30.684631   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 61/120
	I0416 16:38:31.686307   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 62/120
	I0416 16:38:32.687702   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 63/120
	I0416 16:38:33.689270   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 64/120
	I0416 16:38:34.691084   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 65/120
	I0416 16:38:35.692909   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 66/120
	I0416 16:38:36.694278   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 67/120
	I0416 16:38:37.695541   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 68/120
	I0416 16:38:38.697021   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 69/120
	I0416 16:38:39.699315   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 70/120
	I0416 16:38:40.701005   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 71/120
	I0416 16:38:41.702528   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 72/120
	I0416 16:38:42.704759   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 73/120
	I0416 16:38:43.705967   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 74/120
	I0416 16:38:44.707741   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 75/120
	I0416 16:38:45.709446   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 76/120
	I0416 16:38:46.711450   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 77/120
	I0416 16:38:47.712949   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 78/120
	I0416 16:38:48.714361   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 79/120
	I0416 16:38:49.716670   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 80/120
	I0416 16:38:50.718063   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 81/120
	I0416 16:38:51.719757   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 82/120
	I0416 16:38:52.721145   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 83/120
	I0416 16:38:53.722538   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 84/120
	I0416 16:38:54.724204   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 85/120
	I0416 16:38:55.725439   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 86/120
	I0416 16:38:56.726734   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 87/120
	I0416 16:38:57.728022   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 88/120
	I0416 16:38:58.729517   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 89/120
	I0416 16:38:59.731757   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 90/120
	I0416 16:39:00.733249   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 91/120
	I0416 16:39:01.734672   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 92/120
	I0416 16:39:02.736094   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 93/120
	I0416 16:39:03.737507   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 94/120
	I0416 16:39:04.739342   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 95/120
	I0416 16:39:05.740971   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 96/120
	I0416 16:39:06.742350   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 97/120
	I0416 16:39:07.744096   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 98/120
	I0416 16:39:08.745442   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 99/120
	I0416 16:39:09.747476   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 100/120
	I0416 16:39:10.749155   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 101/120
	I0416 16:39:11.751456   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 102/120
	I0416 16:39:12.752770   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 103/120
	I0416 16:39:13.754134   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 104/120
	I0416 16:39:14.756538   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 105/120
	I0416 16:39:15.757981   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 106/120
	I0416 16:39:16.759717   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 107/120
	I0416 16:39:17.760907   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 108/120
	I0416 16:39:18.762210   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 109/120
	I0416 16:39:19.764486   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 110/120
	I0416 16:39:20.765841   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 111/120
	I0416 16:39:21.767404   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 112/120
	I0416 16:39:22.768651   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 113/120
	I0416 16:39:23.770083   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 114/120
	I0416 16:39:24.772155   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 115/120
	I0416 16:39:25.773580   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 116/120
	I0416 16:39:26.775230   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 117/120
	I0416 16:39:27.776562   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 118/120
	I0416 16:39:28.777700   24939 main.go:141] libmachine: (ha-543552-m02) Waiting for machine to stop 119/120
	I0416 16:39:29.778245   24939 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 16:39:29.778371   24939 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-543552 node stop m02 -v=7 --alsologtostderr": exit status 30
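
The trace above shows the pattern behind the exit status 30: the driver issues a Stop call and then polls the machine state once per second for up to 120 attempts, and when the VM is still "Running" after the last attempt it returns "unable to stop vm". The Go sketch below only illustrates that bounded-poll pattern under stated assumptions; stop and getState are hypothetical stand-ins for the driver's Stop/GetState calls, and this is not minikube's actual libmachine code.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// stopWithTimeout issues a stop request and then polls the machine state
	// once per second, up to `attempts` times, before giving up.
	func stopWithTimeout(stop func() error, getState func() string, attempts int) error {
		if err := stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			if getState() != "Running" {
				return nil // machine left the Running state in time
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Fake driver that never stops, reproducing the timeout failure above
		// (3 attempts here instead of 120 to keep the demo short).
		err := stopWithTimeout(
			func() error { return nil },
			func() string { return "Running" },
			3,
		)
		fmt.Println("stop err:", err)
	}

With a driver that never leaves "Running", this returns the same "unable to stop vm" error after the final attempt.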
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 3 (19.045788163s)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-543552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:39:29.832537   25374 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:39:29.832667   25374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:39:29.832678   25374 out.go:304] Setting ErrFile to fd 2...
	I0416 16:39:29.832682   25374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:39:29.832914   25374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:39:29.833128   25374 out.go:298] Setting JSON to false
	I0416 16:39:29.833154   25374 mustload.go:65] Loading cluster: ha-543552
	I0416 16:39:29.833274   25374 notify.go:220] Checking for updates...
	I0416 16:39:29.833611   25374 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:39:29.833632   25374 status.go:255] checking status of ha-543552 ...
	I0416 16:39:29.834154   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:29.834217   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:29.851368   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44701
	I0416 16:39:29.851838   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:29.852477   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:29.852497   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:29.852868   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:29.853062   25374 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:39:29.854535   25374 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:39:29.854564   25374 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:39:29.854872   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:29.854916   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:29.869837   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34725
	I0416 16:39:29.870194   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:29.870680   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:29.870701   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:29.871009   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:29.871193   25374 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:39:29.873771   25374 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:29.874175   25374 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:39:29.874219   25374 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:29.874293   25374 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:39:29.874588   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:29.874621   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:29.889403   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0416 16:39:29.889789   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:29.890240   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:29.890263   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:29.890568   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:29.890726   25374 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:39:29.890857   25374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:29.890889   25374 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:39:29.893758   25374 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:29.894219   25374 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:39:29.894256   25374 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:29.894400   25374 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:39:29.894554   25374 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:39:29.894723   25374 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:39:29.894885   25374 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:39:29.982356   25374 ssh_runner.go:195] Run: systemctl --version
	I0416 16:39:29.990541   25374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:39:30.008316   25374 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:39:30.008360   25374 api_server.go:166] Checking apiserver status ...
	I0416 16:39:30.008419   25374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:30.024877   25374 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W0416 16:39:30.035754   25374 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:39:30.035814   25374 ssh_runner.go:195] Run: ls
	I0416 16:39:30.041110   25374 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:39:30.048036   25374 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:39:30.048062   25374 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:39:30.048079   25374 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:39:30.048094   25374 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:39:30.048391   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:30.048422   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:30.064175   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I0416 16:39:30.064629   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:30.065087   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:30.065107   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:30.065440   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:30.065663   25374 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:39:30.067377   25374 status.go:330] ha-543552-m02 host status = "Running" (err=<nil>)
	I0416 16:39:30.067395   25374 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:39:30.067723   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:30.067768   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:30.082523   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0416 16:39:30.082965   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:30.083447   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:30.083469   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:30.083736   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:30.083929   25374 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:39:30.086701   25374 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:30.087157   25374 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:39:30.087204   25374 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:30.087342   25374 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:39:30.087743   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:30.087780   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:30.103300   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0416 16:39:30.103700   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:30.104246   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:30.104272   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:30.104559   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:30.104753   25374 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:39:30.104915   25374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:30.104952   25374 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:39:30.107717   25374 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:30.108154   25374 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:39:30.108185   25374 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:30.108324   25374 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:39:30.108491   25374 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:39:30.108650   25374 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:39:30.108790   25374 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	W0416 16:39:48.449048   25374 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:39:48.449127   25374 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0416 16:39:48.449145   25374 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:39:48.449155   25374 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 16:39:48.449177   25374 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:39:48.449192   25374 status.go:255] checking status of ha-543552-m03 ...
	I0416 16:39:48.449587   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:48.449640   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:48.464213   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0416 16:39:48.464652   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:48.465108   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:48.465135   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:48.465445   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:48.465634   25374 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:39:48.467263   25374 status.go:330] ha-543552-m03 host status = "Running" (err=<nil>)
	I0416 16:39:48.467276   25374 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:39:48.467572   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:48.467610   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:48.482636   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46395
	I0416 16:39:48.483001   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:48.483460   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:48.483481   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:48.483820   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:48.484000   25374 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:39:48.486591   25374 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:39:48.487025   25374 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:39:48.487052   25374 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:39:48.487129   25374 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:39:48.487452   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:48.487488   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:48.501612   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0416 16:39:48.501996   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:48.502455   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:48.502477   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:48.502782   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:48.502963   25374 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:39:48.503152   25374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:48.503171   25374 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:39:48.505819   25374 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:39:48.506213   25374 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:39:48.506241   25374 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:39:48.506354   25374 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:39:48.506523   25374 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:39:48.506664   25374 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:39:48.506808   25374 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:39:48.591371   25374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:39:48.614438   25374 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:39:48.614462   25374 api_server.go:166] Checking apiserver status ...
	I0416 16:39:48.614489   25374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:48.633008   25374 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0416 16:39:48.643966   25374 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:39:48.644027   25374 ssh_runner.go:195] Run: ls
	I0416 16:39:48.649933   25374 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:39:48.654336   25374 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:39:48.654359   25374 status.go:422] ha-543552-m03 apiserver status = Running (err=<nil>)
	I0416 16:39:48.654371   25374 status.go:257] ha-543552-m03 status: &{Name:ha-543552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:39:48.654399   25374 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:39:48.654760   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:48.654798   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:48.670442   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43695
	I0416 16:39:48.670858   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:48.671282   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:48.671301   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:48.671614   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:48.671801   25374 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:39:48.673408   25374 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:39:48.673423   25374 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:39:48.673690   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:48.673730   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:48.687968   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43143
	I0416 16:39:48.688395   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:48.688870   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:48.688896   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:48.689304   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:48.689516   25374 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:39:48.692748   25374 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:39:48.693217   25374 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:39:48.693248   25374 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:39:48.693390   25374 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:39:48.693686   25374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:48.693720   25374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:48.707964   25374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I0416 16:39:48.708313   25374 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:48.708734   25374 main.go:141] libmachine: Using API Version  1
	I0416 16:39:48.708754   25374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:48.709075   25374 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:48.709302   25374 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:39:48.709462   25374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:48.709477   25374 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:39:48.712201   25374 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:39:48.712582   25374 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:39:48.712612   25374 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:39:48.712743   25374 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:39:48.712913   25374 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:39:48.713055   25374 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:39:48.713179   25374 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:39:48.803448   25374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:39:48.824168   25374 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr" : exit status 3
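
The status trace above suggests the per-node probe order: open an SSH session (for m02 the dial fails with "no route to host", so the node is reported as Host:Error with kubelet and apiserver Nonexistent), check kubelet with systemctl is-active, and for control-plane nodes query the apiserver /healthz endpoint. The Go sketch below is only an illustration of that order under those assumptions; runSSH is a hypothetical helper, not minikube's API.

	package main

	import (
		"fmt"
		"net/http"
	)

	// nodeStatus mirrors the fields shown in the status output above.
	type nodeStatus struct {
		Host, Kubelet, APIServer string
	}

	// probeNode sketches the probe order suggested by the trace: SSH
	// reachability first, then kubelet via systemctl, then (for control-plane
	// nodes) the apiserver /healthz endpoint.
	func probeNode(runSSH func(cmd string) error, healthzURL string, controlPlane bool) nodeStatus {
		st := nodeStatus{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		if err := runSSH("true"); err != nil {
			// e.g. "dial tcp 192.168.39.80:22: connect: no route to host"
			return nodeStatus{Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
		}
		if runSSH("sudo systemctl is-active --quiet service kubelet") == nil {
			st.Kubelet = "Running"
		}
		if !controlPlane {
			st.APIServer = "Irrelevant"
			return st
		}
		if resp, err := http.Get(healthzURL); err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			st.APIServer = "Running"
		}
		return st
	}

	func main() {
		// Simulate the ha-543552-m02 case above, where the SSH dial itself fails.
		st := probeNode(
			func(string) error { return fmt.Errorf("dial tcp 192.168.39.80:22: connect: no route to host") },
			"https://192.168.39.254:8443/healthz",
			true,
		)
		fmt.Printf("%+v\n", st)
	}

Running this with an always-failing runSSH prints Host:Error with Nonexistent kubelet and apiserver, matching the m02 row in the status output above.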
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-543552 -n ha-543552
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-543552 logs -n 25: (1.610581466s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552-m03.txt |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552:/home/docker/cp-test_ha-543552-m03_ha-543552.txt                       |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552 sudo cat                                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552.txt                                 |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m02:/home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m02 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04:/home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m04 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp testdata/cp-test.txt                                                | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552:/home/docker/cp-test_ha-543552-m04_ha-543552.txt                       |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552 sudo cat                                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552.txt                                 |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m02:/home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m02 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03:/home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m03 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-543552 node stop m02 -v=7                                                     | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:32:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:32:57.811851   20924 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:32:57.811977   20924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:32:57.811990   20924 out.go:304] Setting ErrFile to fd 2...
	I0416 16:32:57.811996   20924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:32:57.812199   20924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:32:57.812765   20924 out.go:298] Setting JSON to false
	I0416 16:32:57.813653   20924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":930,"bootTime":1713284248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:32:57.813708   20924 start.go:139] virtualization: kvm guest
	I0416 16:32:57.815973   20924 out.go:177] * [ha-543552] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:32:57.817513   20924 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:32:57.818968   20924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:32:57.817534   20924 notify.go:220] Checking for updates...
	I0416 16:32:57.821609   20924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:32:57.823005   20924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:32:57.824387   20924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:32:57.825724   20924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:32:57.827100   20924 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:32:57.861189   20924 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 16:32:57.862626   20924 start.go:297] selected driver: kvm2
	I0416 16:32:57.862645   20924 start.go:901] validating driver "kvm2" against <nil>
	I0416 16:32:57.862665   20924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:32:57.863716   20924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:32:57.863810   20924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:32:57.878756   20924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:32:57.878800   20924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:32:57.878987   20924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:32:57.879047   20924 cni.go:84] Creating CNI manager for ""
	I0416 16:32:57.879060   20924 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:32:57.879064   20924 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:32:57.879111   20924 start.go:340] cluster config:
	{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:32:57.879198   20924 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:32:57.880865   20924 out.go:177] * Starting "ha-543552" primary control-plane node in "ha-543552" cluster
	I0416 16:32:57.881998   20924 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:32:57.882031   20924 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 16:32:57.882037   20924 cache.go:56] Caching tarball of preloaded images
	I0416 16:32:57.882096   20924 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:32:57.882107   20924 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:32:57.882400   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:32:57.882418   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json: {Name:mkf68664e68f97a8237c738cfc5938b681c72c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:32:57.882548   20924 start.go:360] acquireMachinesLock for ha-543552: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:32:57.882584   20924 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "ha-543552"
	I0416 16:32:57.882601   20924 start.go:93] Provisioning new machine with config: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:32:57.882670   20924 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 16:32:57.884395   20924 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:32:57.884520   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:32:57.884553   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:32:57.898753   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0416 16:32:57.899136   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:32:57.899675   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:32:57.899695   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:32:57.900042   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:32:57.900224   20924 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:32:57.900387   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:32:57.900547   20924 start.go:159] libmachine.API.Create for "ha-543552" (driver="kvm2")
	I0416 16:32:57.900575   20924 client.go:168] LocalClient.Create starting
	I0416 16:32:57.900607   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 16:32:57.900645   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:32:57.900659   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:32:57.900711   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 16:32:57.900729   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:32:57.900741   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:32:57.900754   20924 main.go:141] libmachine: Running pre-create checks...
	I0416 16:32:57.900771   20924 main.go:141] libmachine: (ha-543552) Calling .PreCreateCheck
	I0416 16:32:57.901115   20924 main.go:141] libmachine: (ha-543552) Calling .GetConfigRaw
	I0416 16:32:57.901499   20924 main.go:141] libmachine: Creating machine...
	I0416 16:32:57.901514   20924 main.go:141] libmachine: (ha-543552) Calling .Create
	I0416 16:32:57.901657   20924 main.go:141] libmachine: (ha-543552) Creating KVM machine...
	I0416 16:32:57.902958   20924 main.go:141] libmachine: (ha-543552) DBG | found existing default KVM network
	I0416 16:32:57.903590   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:57.903459   20947 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0416 16:32:57.903618   20924 main.go:141] libmachine: (ha-543552) DBG | created network xml: 
	I0416 16:32:57.903639   20924 main.go:141] libmachine: (ha-543552) DBG | <network>
	I0416 16:32:57.903667   20924 main.go:141] libmachine: (ha-543552) DBG |   <name>mk-ha-543552</name>
	I0416 16:32:57.903684   20924 main.go:141] libmachine: (ha-543552) DBG |   <dns enable='no'/>
	I0416 16:32:57.903694   20924 main.go:141] libmachine: (ha-543552) DBG |   
	I0416 16:32:57.903703   20924 main.go:141] libmachine: (ha-543552) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0416 16:32:57.903709   20924 main.go:141] libmachine: (ha-543552) DBG |     <dhcp>
	I0416 16:32:57.903718   20924 main.go:141] libmachine: (ha-543552) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0416 16:32:57.903756   20924 main.go:141] libmachine: (ha-543552) DBG |     </dhcp>
	I0416 16:32:57.903781   20924 main.go:141] libmachine: (ha-543552) DBG |   </ip>
	I0416 16:32:57.903802   20924 main.go:141] libmachine: (ha-543552) DBG |   
	I0416 16:32:57.903821   20924 main.go:141] libmachine: (ha-543552) DBG | </network>
	I0416 16:32:57.903840   20924 main.go:141] libmachine: (ha-543552) DBG | 
	I0416 16:32:57.908616   20924 main.go:141] libmachine: (ha-543552) DBG | trying to create private KVM network mk-ha-543552 192.168.39.0/24...
	I0416 16:32:57.972477   20924 main.go:141] libmachine: (ha-543552) DBG | private KVM network mk-ha-543552 192.168.39.0/24 created
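
[editor's note] The driver first picks a free /24 (192.168.39.0/24 here), renders the <network> XML shown above, and asks libvirt to define and start it. A rough, self-contained Go sketch of that step, assuming the virsh CLI for illustration (the real docker-machine-driver-kvm2 goes through the libvirt API rather than shelling out):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
	"text/template"
)

// networkTmpl mirrors the <network> XML printed in the log above; the values
// passed in main are taken from this run, everything else is illustrative.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.RangeStart}}' end='{{.RangeEnd}}'/>
    </dhcp>
  </ip>
</network>`

type netParams struct {
	Name, Gateway, RangeStart, RangeEnd string
}

// defineNetwork renders the XML to a temp file and hands it to virsh,
// approximating the "created network xml" / "private KVM network ... created"
// steps above.
func defineNetwork(p netParams) error {
	var xml bytes.Buffer
	if err := template.Must(template.New("net").Parse(networkTmpl)).Execute(&xml, p); err != nil {
		return err
	}
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.Write(xml.Bytes()); err != nil {
		return err
	}
	f.Close()
	if out, err := exec.Command("virsh", "net-define", f.Name()).CombinedOutput(); err != nil {
		return fmt.Errorf("net-define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "net-start", p.Name).CombinedOutput(); err != nil {
		return fmt.Errorf("net-start: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := defineNetwork(netParams{
		Name:       "mk-ha-543552",
		Gateway:    "192.168.39.1",
		RangeStart: "192.168.39.2",
		RangeEnd:   "192.168.39.253",
	})
	if err != nil {
		log.Fatal(err)
	}
}
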
	I0416 16:32:57.972507   20924 main.go:141] libmachine: (ha-543552) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552 ...
	I0416 16:32:57.972520   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:57.972440   20947 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:32:57.972560   20924 main.go:141] libmachine: (ha-543552) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:32:57.972598   20924 main.go:141] libmachine: (ha-543552) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:32:58.192119   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:58.191972   20947 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa...
	I0416 16:32:58.434619   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:58.434483   20947 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/ha-543552.rawdisk...
	I0416 16:32:58.434649   20924 main.go:141] libmachine: (ha-543552) DBG | Writing magic tar header
	I0416 16:32:58.434658   20924 main.go:141] libmachine: (ha-543552) DBG | Writing SSH key tar header
	I0416 16:32:58.434666   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:58.434593   20947 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552 ...
	I0416 16:32:58.434679   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552
	I0416 16:32:58.434750   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552 (perms=drwx------)
	I0416 16:32:58.434773   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 16:32:58.434781   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:32:58.434788   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:32:58.434811   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 16:32:58.434824   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:32:58.434838   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:32:58.434847   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home
	I0416 16:32:58.434853   20924 main.go:141] libmachine: (ha-543552) DBG | Skipping /home - not owner
	I0416 16:32:58.434864   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 16:32:58.434876   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 16:32:58.434884   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:32:58.434894   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
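
[editor's note] The "Creating ssh key" step generates an RSA key pair under .minikube/machines/<name>/. A minimal sketch of what that amounts to, assuming only the standard library plus golang.org/x/crypto/ssh; the output directory is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh"
)

// writeSSHKeyPair writes an id_rsa/id_rsa.pub pair like the one created for
// the machine above.
func writeSSHKeyPair(dir string) error {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile(filepath.Join(dir, "id_rsa"), privPEM, 0o600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0o644)
}

func main() {
	if err := writeSSHKeyPair("."); err != nil {
		log.Fatal(err)
	}
}
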
	I0416 16:32:58.434906   20924 main.go:141] libmachine: (ha-543552) Creating domain...
	I0416 16:32:58.436047   20924 main.go:141] libmachine: (ha-543552) define libvirt domain using xml: 
	I0416 16:32:58.436060   20924 main.go:141] libmachine: (ha-543552) <domain type='kvm'>
	I0416 16:32:58.436066   20924 main.go:141] libmachine: (ha-543552)   <name>ha-543552</name>
	I0416 16:32:58.436071   20924 main.go:141] libmachine: (ha-543552)   <memory unit='MiB'>2200</memory>
	I0416 16:32:58.436076   20924 main.go:141] libmachine: (ha-543552)   <vcpu>2</vcpu>
	I0416 16:32:58.436091   20924 main.go:141] libmachine: (ha-543552)   <features>
	I0416 16:32:58.436097   20924 main.go:141] libmachine: (ha-543552)     <acpi/>
	I0416 16:32:58.436103   20924 main.go:141] libmachine: (ha-543552)     <apic/>
	I0416 16:32:58.436108   20924 main.go:141] libmachine: (ha-543552)     <pae/>
	I0416 16:32:58.436116   20924 main.go:141] libmachine: (ha-543552)     
	I0416 16:32:58.436121   20924 main.go:141] libmachine: (ha-543552)   </features>
	I0416 16:32:58.436128   20924 main.go:141] libmachine: (ha-543552)   <cpu mode='host-passthrough'>
	I0416 16:32:58.436133   20924 main.go:141] libmachine: (ha-543552)   
	I0416 16:32:58.436145   20924 main.go:141] libmachine: (ha-543552)   </cpu>
	I0416 16:32:58.436156   20924 main.go:141] libmachine: (ha-543552)   <os>
	I0416 16:32:58.436162   20924 main.go:141] libmachine: (ha-543552)     <type>hvm</type>
	I0416 16:32:58.436195   20924 main.go:141] libmachine: (ha-543552)     <boot dev='cdrom'/>
	I0416 16:32:58.436218   20924 main.go:141] libmachine: (ha-543552)     <boot dev='hd'/>
	I0416 16:32:58.436229   20924 main.go:141] libmachine: (ha-543552)     <bootmenu enable='no'/>
	I0416 16:32:58.436243   20924 main.go:141] libmachine: (ha-543552)   </os>
	I0416 16:32:58.436258   20924 main.go:141] libmachine: (ha-543552)   <devices>
	I0416 16:32:58.436272   20924 main.go:141] libmachine: (ha-543552)     <disk type='file' device='cdrom'>
	I0416 16:32:58.436285   20924 main.go:141] libmachine: (ha-543552)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/boot2docker.iso'/>
	I0416 16:32:58.436299   20924 main.go:141] libmachine: (ha-543552)       <target dev='hdc' bus='scsi'/>
	I0416 16:32:58.436314   20924 main.go:141] libmachine: (ha-543552)       <readonly/>
	I0416 16:32:58.436331   20924 main.go:141] libmachine: (ha-543552)     </disk>
	I0416 16:32:58.436346   20924 main.go:141] libmachine: (ha-543552)     <disk type='file' device='disk'>
	I0416 16:32:58.436360   20924 main.go:141] libmachine: (ha-543552)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:32:58.436378   20924 main.go:141] libmachine: (ha-543552)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/ha-543552.rawdisk'/>
	I0416 16:32:58.436390   20924 main.go:141] libmachine: (ha-543552)       <target dev='hda' bus='virtio'/>
	I0416 16:32:58.436401   20924 main.go:141] libmachine: (ha-543552)     </disk>
	I0416 16:32:58.436407   20924 main.go:141] libmachine: (ha-543552)     <interface type='network'>
	I0416 16:32:58.436420   20924 main.go:141] libmachine: (ha-543552)       <source network='mk-ha-543552'/>
	I0416 16:32:58.436436   20924 main.go:141] libmachine: (ha-543552)       <model type='virtio'/>
	I0416 16:32:58.436454   20924 main.go:141] libmachine: (ha-543552)     </interface>
	I0416 16:32:58.436469   20924 main.go:141] libmachine: (ha-543552)     <interface type='network'>
	I0416 16:32:58.436486   20924 main.go:141] libmachine: (ha-543552)       <source network='default'/>
	I0416 16:32:58.436499   20924 main.go:141] libmachine: (ha-543552)       <model type='virtio'/>
	I0416 16:32:58.436515   20924 main.go:141] libmachine: (ha-543552)     </interface>
	I0416 16:32:58.436530   20924 main.go:141] libmachine: (ha-543552)     <serial type='pty'>
	I0416 16:32:58.436542   20924 main.go:141] libmachine: (ha-543552)       <target port='0'/>
	I0416 16:32:58.436556   20924 main.go:141] libmachine: (ha-543552)     </serial>
	I0416 16:32:58.436573   20924 main.go:141] libmachine: (ha-543552)     <console type='pty'>
	I0416 16:32:58.436585   20924 main.go:141] libmachine: (ha-543552)       <target type='serial' port='0'/>
	I0416 16:32:58.436606   20924 main.go:141] libmachine: (ha-543552)     </console>
	I0416 16:32:58.436621   20924 main.go:141] libmachine: (ha-543552)     <rng model='virtio'>
	I0416 16:32:58.436635   20924 main.go:141] libmachine: (ha-543552)       <backend model='random'>/dev/random</backend>
	I0416 16:32:58.436648   20924 main.go:141] libmachine: (ha-543552)     </rng>
	I0416 16:32:58.436674   20924 main.go:141] libmachine: (ha-543552)     
	I0416 16:32:58.436693   20924 main.go:141] libmachine: (ha-543552)     
	I0416 16:32:58.436706   20924 main.go:141] libmachine: (ha-543552)   </devices>
	I0416 16:32:58.436719   20924 main.go:141] libmachine: (ha-543552) </domain>
	I0416 16:32:58.436731   20924 main.go:141] libmachine: (ha-543552) 
	I0416 16:32:58.441002   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:4a:90:dd in network default
	I0416 16:32:58.441610   20924 main.go:141] libmachine: (ha-543552) Ensuring networks are active...
	I0416 16:32:58.441639   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:32:58.442337   20924 main.go:141] libmachine: (ha-543552) Ensuring network default is active
	I0416 16:32:58.442644   20924 main.go:141] libmachine: (ha-543552) Ensuring network mk-ha-543552 is active
	I0416 16:32:58.443084   20924 main.go:141] libmachine: (ha-543552) Getting domain xml...
	I0416 16:32:58.443794   20924 main.go:141] libmachine: (ha-543552) Creating domain...
	I0416 16:32:59.616203   20924 main.go:141] libmachine: (ha-543552) Waiting to get IP...
	I0416 16:32:59.617108   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:32:59.617542   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:32:59.617579   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:59.617535   20947 retry.go:31] will retry after 203.520709ms: waiting for machine to come up
	I0416 16:32:59.822929   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:32:59.823289   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:32:59.823319   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:59.823277   20947 retry.go:31] will retry after 286.775995ms: waiting for machine to come up
	I0416 16:33:00.111725   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:00.112119   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:00.112144   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:00.112091   20947 retry.go:31] will retry after 373.736633ms: waiting for machine to come up
	I0416 16:33:00.487537   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:00.487898   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:00.487925   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:00.487849   20947 retry.go:31] will retry after 510.897921ms: waiting for machine to come up
	I0416 16:33:01.000715   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:01.001195   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:01.001219   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:01.001149   20947 retry.go:31] will retry after 676.370357ms: waiting for machine to come up
	I0416 16:33:01.679005   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:01.679416   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:01.679442   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:01.679364   20947 retry.go:31] will retry after 583.153779ms: waiting for machine to come up
	I0416 16:33:02.264118   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:02.264453   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:02.264491   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:02.264416   20947 retry.go:31] will retry after 784.977619ms: waiting for machine to come up
	I0416 16:33:03.051094   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:03.051492   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:03.051522   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:03.051431   20947 retry.go:31] will retry after 955.233152ms: waiting for machine to come up
	I0416 16:33:04.008677   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:04.009096   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:04.009124   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:04.009061   20947 retry.go:31] will retry after 1.709366699s: waiting for machine to come up
	I0416 16:33:05.720765   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:05.721119   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:05.721145   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:05.721084   20947 retry.go:31] will retry after 1.476164434s: waiting for machine to come up
	I0416 16:33:07.199821   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:07.200308   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:07.200331   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:07.200274   20947 retry.go:31] will retry after 2.756833s: waiting for machine to come up
	I0416 16:33:09.960071   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:09.960473   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:09.960502   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:09.960424   20947 retry.go:31] will retry after 2.969177743s: waiting for machine to come up
	I0416 16:33:12.931400   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:12.931807   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:12.931840   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:12.931755   20947 retry.go:31] will retry after 3.498551484s: waiting for machine to come up
	I0416 16:33:16.434396   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:16.434808   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:16.434828   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:16.434772   20947 retry.go:31] will retry after 4.44313934s: waiting for machine to come up
	I0416 16:33:20.881352   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:20.881820   20924 main.go:141] libmachine: (ha-543552) Found IP for machine: 192.168.39.97
	I0416 16:33:20.881865   20924 main.go:141] libmachine: (ha-543552) Reserving static IP address...
	I0416 16:33:20.881881   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has current primary IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:20.882159   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find host DHCP lease matching {name: "ha-543552", mac: "52:54:00:3d:bc:28", ip: "192.168.39.97"} in network mk-ha-543552
	I0416 16:33:20.950850   20924 main.go:141] libmachine: (ha-543552) DBG | Getting to WaitForSSH function...
	I0416 16:33:20.950888   20924 main.go:141] libmachine: (ha-543552) Reserved static IP address: 192.168.39.97
	I0416 16:33:20.950923   20924 main.go:141] libmachine: (ha-543552) Waiting for SSH to be available...
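
[editor's note] The "will retry after ..." lines above (retry.go:31) come from polling the network's DHCP leases with a growing delay until the new domain shows up. A stand-alone sketch of that pattern; lookupIP is a placeholder, and the concrete delays and jitter are not minikube's exact values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases for the
// domain's MAC address; it returns an error until a lease exists.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries with a growing, jittered delay, much like the
// "will retry after 203ms / 286ms / ..." sequence in the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil && ip != "" {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the delay between attempts
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:3d:bc:28", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
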
	I0416 16:33:20.953231   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:20.953634   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:20.953659   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:20.953782   20924 main.go:141] libmachine: (ha-543552) DBG | Using SSH client type: external
	I0416 16:33:20.953799   20924 main.go:141] libmachine: (ha-543552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa (-rw-------)
	I0416 16:33:20.953834   20924 main.go:141] libmachine: (ha-543552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:33:20.953864   20924 main.go:141] libmachine: (ha-543552) DBG | About to run SSH command:
	I0416 16:33:20.953878   20924 main.go:141] libmachine: (ha-543552) DBG | exit 0
	I0416 16:33:21.081004   20924 main.go:141] libmachine: (ha-543552) DBG | SSH cmd err, output: <nil>: 
	I0416 16:33:21.081285   20924 main.go:141] libmachine: (ha-543552) KVM machine creation complete!
	I0416 16:33:21.081606   20924 main.go:141] libmachine: (ha-543552) Calling .GetConfigRaw
	I0416 16:33:21.082145   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:21.082313   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:21.082484   20924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:33:21.082496   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:33:21.083606   20924 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:33:21.083618   20924 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:33:21.083623   20924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:33:21.083628   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.085909   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.086318   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.086335   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.086464   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.086638   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.086822   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.087023   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.087190   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:21.087364   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:21.087375   20924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:33:21.196513   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:33:21.196540   20924 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:33:21.196550   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.199187   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.199528   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.199558   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.199696   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.199893   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.200061   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.200149   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.200319   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:21.200485   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:21.200495   20924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:33:21.310711   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:33:21.310773   20924 main.go:141] libmachine: found compatible host: buildroot
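
[editor's note] The provisioner is detected by reading /etc/os-release over SSH and keying off its fields ("buildroot" above). A small sketch of the key=value parsing, fed with the exact output captured above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release text into a map of key -> value,
// stripping surrounding quotes.
func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

func main() {
	osr := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(osr)
	fmt.Println("detected provisioner:", info["ID"]) // buildroot
}
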
	I0416 16:33:21.310787   20924 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:33:21.310800   20924 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:33:21.311070   20924 buildroot.go:166] provisioning hostname "ha-543552"
	I0416 16:33:21.311094   20924 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:33:21.311296   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.313651   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.313957   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.313985   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.314090   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.314269   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.314450   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.314590   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.314734   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:21.314924   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:21.314938   20924 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-543552 && echo "ha-543552" | sudo tee /etc/hostname
	I0416 16:33:21.436909   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552
	
	I0416 16:33:21.436936   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.439460   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.439772   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.439802   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.439937   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.440119   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.440378   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.440540   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.440727   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:21.440925   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:21.440942   20924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-543552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-543552/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-543552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:33:21.559273   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
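
[editor's note] Each "About to run SSH command" step runs a one-off command on the guest, either through the external ssh binary or the native client shown in the &{{{...}}} lines. A compressed sketch of the native path using golang.org/x/crypto/ssh, reusing the address, user, key path, and hostname command from this log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the machine with the generated id_rsa key and runs one
// command, roughly what each provisioning step above does.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.97:22", "docker",
		"/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa",
		`sudo hostname ha-543552 && echo "ha-543552" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
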
	I0416 16:33:21.559299   20924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:33:21.559315   20924 buildroot.go:174] setting up certificates
	I0416 16:33:21.559338   20924 provision.go:84] configureAuth start
	I0416 16:33:21.559346   20924 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:33:21.559637   20924 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:33:21.562099   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.562405   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.562437   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.562585   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.564678   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.564968   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.564993   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.565087   20924 provision.go:143] copyHostCerts
	I0416 16:33:21.565110   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:33:21.565149   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 16:33:21.565165   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:33:21.565231   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:33:21.565315   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:33:21.565332   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 16:33:21.565339   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:33:21.565361   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:33:21.565412   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:33:21.565434   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 16:33:21.565441   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:33:21.565461   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:33:21.565517   20924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.ha-543552 san=[127.0.0.1 192.168.39.97 ha-543552 localhost minikube]
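
[editor's note] copyHostCerts syncs ca.pem/cert.pem/key.pem into .minikube, and the server certificate is then issued for the SANs listed above (127.0.0.1, 192.168.39.97, ha-543552, localhost, minikube). A stripped-down sketch of that issuance with crypto/x509; here the CA is generated in memory, whereas the real step loads ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// In-memory CA (illustrative stand-in for ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// Server certificate with the SANs from the log line above.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	server := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-543552"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		DNSNames:     []string{"ha-543552", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.97")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, server, ca, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
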
	I0416 16:33:21.857459   20924 provision.go:177] copyRemoteCerts
	I0416 16:33:21.857512   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:33:21.857531   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.860096   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.860371   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.860401   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.860552   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.860729   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.860922   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.861051   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:21.944615   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 16:33:21.944674   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:33:21.971869   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 16:33:21.971929   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:33:21.997689   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 16:33:21.997758   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:33:22.022986   20924 provision.go:87] duration metric: took 463.635224ms to configureAuth
	I0416 16:33:22.023016   20924 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:33:22.023191   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:33:22.023303   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.025890   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.026338   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.026365   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.026539   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.026727   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.026880   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.027026   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.027234   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:22.027382   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:22.027397   20924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:33:22.303097   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 16:33:22.303126   20924 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:33:22.303135   20924 main.go:141] libmachine: (ha-543552) Calling .GetURL
	I0416 16:33:22.304367   20924 main.go:141] libmachine: (ha-543552) DBG | Using libvirt version 6000000
	I0416 16:33:22.307123   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.307554   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.307591   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.307768   20924 main.go:141] libmachine: Docker is up and running!
	I0416 16:33:22.307780   20924 main.go:141] libmachine: Reticulating splines...
	I0416 16:33:22.307786   20924 client.go:171] duration metric: took 24.407201533s to LocalClient.Create
	I0416 16:33:22.307808   20924 start.go:167] duration metric: took 24.407260974s to libmachine.API.Create "ha-543552"
	I0416 16:33:22.307821   20924 start.go:293] postStartSetup for "ha-543552" (driver="kvm2")
	I0416 16:33:22.307836   20924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:33:22.307853   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.308090   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:33:22.308113   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.310239   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.310570   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.310618   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.310700   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.310915   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.311071   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.311234   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:22.396940   20924 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:33:22.401934   20924 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:33:22.401955   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:33:22.402019   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:33:22.402135   20924 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 16:33:22.402147   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 16:33:22.402252   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:33:22.413322   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:33:22.440572   20924 start.go:296] duration metric: took 132.736085ms for postStartSetup
	I0416 16:33:22.440628   20924 main.go:141] libmachine: (ha-543552) Calling .GetConfigRaw
	I0416 16:33:22.441238   20924 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:33:22.443669   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.443957   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.443987   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.444201   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:33:22.444404   20924 start.go:128] duration metric: took 24.561721857s to createHost
	I0416 16:33:22.444431   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.446660   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.447027   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.447055   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.447184   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.447370   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.447525   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.447667   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.447819   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:22.447971   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:22.447985   20924 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:33:22.554052   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285202.524318982
	
	I0416 16:33:22.554106   20924 fix.go:216] guest clock: 1713285202.524318982
	I0416 16:33:22.554118   20924 fix.go:229] Guest: 2024-04-16 16:33:22.524318982 +0000 UTC Remote: 2024-04-16 16:33:22.444419438 +0000 UTC m=+24.679599031 (delta=79.899544ms)
	I0416 16:33:22.554170   20924 fix.go:200] guest clock delta is within tolerance: 79.899544ms
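
[editor's note] The guest clock check reads the VM's time (the mangled "date +%!s(MISSING).%!N(MISSING)" is the logger garbling what is effectively "date +%s.%N") and compares it with the host-side timestamp; the run above measured a 79.899544ms skew. A tiny sketch of that comparison using the two timestamps from this log; the tolerance value is illustrative, not minikube's exact threshold:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1713285202, 524318982).UTC()                   // value returned by the guest's date command
	remote := time.Date(2024, 4, 16, 16, 33, 22, 444419438, time.UTC) // host-side timestamp from fix.go:229
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
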
	I0416 16:33:22.554179   20924 start.go:83] releasing machines lock for "ha-543552", held for 24.671583823s
	I0416 16:33:22.554209   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.554476   20924 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:33:22.557142   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.557527   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.557549   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.557678   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.558116   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.558288   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.558374   20924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:33:22.558415   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.558473   20924 ssh_runner.go:195] Run: cat /version.json
	I0416 16:33:22.558492   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.561057   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.561248   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.561388   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.561415   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.561566   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.561578   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.561615   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.561735   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.561812   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.561886   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.561983   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.562048   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:22.562378   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.562541   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:22.661992   20924 ssh_runner.go:195] Run: systemctl --version
	I0416 16:33:22.668411   20924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:33:22.850856   20924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:33:22.857543   20924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:33:22.857605   20924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:33:22.876670   20924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:33:22.876692   20924 start.go:494] detecting cgroup driver to use...
	I0416 16:33:22.876750   20924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:33:22.894470   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:33:22.909759   20924 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:33:22.909800   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:33:22.925012   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:33:22.940185   20924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:33:23.070168   20924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:33:23.226306   20924 docker.go:233] disabling docker service ...
	I0416 16:33:23.226362   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:33:23.242582   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:33:23.257400   20924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:33:23.415840   20924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:33:23.550004   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
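Both cri-docker and docker are stopped and masked above (socket units included) because systemd socket activation would otherwise restart the daemon on the next connection to its socket; the final is-active check confirms nothing is left running. A minimal sketch of the same idea, assuming a systemd host (the exact stop/disable/mask split used by the run is the one in the log):

    # Stop the socket and the service, then mask both so socket activation
    # cannot bring the daemon back; is-active returns success only if it still runs.
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl mask docker.socket docker.service
    sudo systemctl is-active --quiet docker && echo "docker is still running"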
	I0416 16:33:23.565816   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:33:23.586337   20924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:33:23.586393   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.598380   20924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:33:23.598438   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.610706   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.623468   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.636111   20924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:33:23.648735   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.661408   20924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.680156   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
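The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A minimal spot-check of the result (same path as in the log; not a command from this run):

    # Confirm the keys the edits above are expected to leave behind.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf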
	I0416 16:33:23.692325   20924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:33:23.703218   20924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:33:23.703260   20924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:33:23.717544   20924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:33:23.728628   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:33:23.861080   20924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 16:33:24.009175   20924 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:33:24.009237   20924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:33:24.014530   20924 start.go:562] Will wait 60s for crictl version
	I0416 16:33:24.014581   20924 ssh_runner.go:195] Run: which crictl
	I0416 16:33:24.018826   20924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:33:24.060662   20924 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:33:24.060753   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:33:24.092035   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:33:24.124827   20924 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:33:24.126217   20924 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:33:24.128565   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:24.128929   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:24.128964   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:24.129143   20924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:33:24.133807   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
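The /etc/hosts rewrite above is a strip-then-append pattern: drop any stale line for the name, append the desired mapping, and copy the temporary file back over /etc/hosts. A generic sketch of the same idea, with a hypothetical hostname and IP rather than values from this report:

    host=example.internal; ip=192.0.2.10    # placeholders, not from this run
    # Remove any existing "<tab>example.internal" line, then append the new mapping.
    { grep -v $'\t'"${host}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"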
	I0416 16:33:24.148663   20924 kubeadm.go:877] updating cluster {Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:33:24.148750   20924 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:33:24.148787   20924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:33:24.186056   20924 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 16:33:24.186112   20924 ssh_runner.go:195] Run: which lz4
	I0416 16:33:24.190645   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:33:24.190725   20924 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:33:24.195390   20924 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:33:24.195421   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 16:33:25.821813   20924 crio.go:462] duration metric: took 1.631118235s to copy over tarball
	I0416 16:33:25.821869   20924 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:33:28.267640   20924 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.445730533s)
	I0416 16:33:28.267671   20924 crio.go:469] duration metric: took 2.445835938s to extract the tarball
	I0416 16:33:28.267680   20924 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:33:28.307685   20924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:33:28.358068   20924 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 16:33:28.358087   20924 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:33:28.358096   20924 kubeadm.go:928] updating node { 192.168.39.97 8443 v1.29.3 crio true true} ...
	I0416 16:33:28.358205   20924 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-543552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
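The [Service] block above is written as a systemd drop-in (it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later); the bare ExecStart= line first clears the base unit's command so the kubelet invocation that follows replaces it rather than appending. To inspect the merged unit on the node, something like:

    # Show the kubelet unit together with all drop-ins, as systemd resolves it.
    systemctl cat kubelet
    # Confirm which ExecStart is in effect after daemon-reload and restart.
    systemctl show kubelet -p ExecStart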
	I0416 16:33:28.358291   20924 ssh_runner.go:195] Run: crio config
	I0416 16:33:28.408507   20924 cni.go:84] Creating CNI manager for ""
	I0416 16:33:28.408525   20924 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:33:28.408535   20924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:33:28.408560   20924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-543552 NodeName:ha-543552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:33:28.408717   20924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-543552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
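The three documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what is later written to /var/tmp/minikube/kubeadm.yaml. A config like this can be exercised without touching node state via kubeadm's dry-run mode, for example (the invocation is a generic sketch, not a command from this run):

    # Render what kubeadm would do with this config without applying anything.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run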
	I0416 16:33:28.408782   20924 kube-vip.go:111] generating kube-vip config ...
	I0416 16:33:28.408833   20924 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:33:28.429384   20924 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:33:28.429473   20924 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
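The static pod above runs kube-vip in ARP mode with leader election: whichever control-plane node holds the plndr-cp-lock lease answers ARP for 192.168.39.254 and load-balances port 8443. Two quick checks one could run once the cluster is up (generic commands, not part of this test run):

    # Which control-plane node currently holds the VIP lease?
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
    # Is the VIP bound on this node's interface?
    ip addr show eth0 | grep -w 192.168.39.254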
	I0416 16:33:28.429518   20924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:33:28.440588   20924 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:33:28.440647   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:33:28.451318   20924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0416 16:33:28.469233   20924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:33:28.486759   20924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0416 16:33:28.504631   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0416 16:33:28.522061   20924 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:33:28.526296   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:33:28.539836   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:33:28.673751   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:33:28.694400   20924 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552 for IP: 192.168.39.97
	I0416 16:33:28.694425   20924 certs.go:194] generating shared ca certs ...
	I0416 16:33:28.694444   20924 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:28.694591   20924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:33:28.694764   20924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:33:28.694790   20924 certs.go:256] generating profile certs ...
	I0416 16:33:28.694945   20924 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key
	I0416 16:33:28.694970   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt with IP's: []
	I0416 16:33:28.900640   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt ...
	I0416 16:33:28.900667   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt: {Name:mkeddd79b0699f023de470f3c894250355f52b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:28.900825   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key ...
	I0416 16:33:28.900845   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key: {Name:mk778c520f35b379c5cb8ee5fa6157173989ee30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:28.900917   20924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.ee9cf71c
	I0416 16:33:28.900932   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.ee9cf71c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.254]
	I0416 16:33:29.076089   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.ee9cf71c ...
	I0416 16:33:29.076118   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.ee9cf71c: {Name:mk77f2b79f2ee01a60e1efd721f633a59434e4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:29.076254   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.ee9cf71c ...
	I0416 16:33:29.076266   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.ee9cf71c: {Name:mk218623fb54360b6300d702d2b43eaa73a10572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:29.076336   20924 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.ee9cf71c -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt
	I0416 16:33:29.076401   20924 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.ee9cf71c -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key
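The apiserver certificate generated above is signed for the service IP, localhost, the node IP and the HA VIP, which is what lets clients reach the API server through 192.168.39.254 as well as directly. The SAN list can be read back from the copy on the node with openssl (path as in the log; this check is not part of the run):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'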
	I0416 16:33:29.076451   20924 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key
	I0416 16:33:29.076466   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt with IP's: []
	I0416 16:33:29.321438   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt ...
	I0416 16:33:29.321500   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt: {Name:mk72aa09e0d8e03c926655a8adab62b8941eb403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:29.321640   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key ...
	I0416 16:33:29.321651   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key: {Name:mk01d0762bf550e927b05c2d906ac33d7efe3fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:29.321711   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:33:29.321727   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:33:29.321737   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:33:29.321750   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:33:29.321759   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:33:29.321769   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:33:29.321782   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:33:29.321791   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:33:29.321845   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 16:33:29.321880   20924 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 16:33:29.321890   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:33:29.321909   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:33:29.321934   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:33:29.321955   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:33:29.321995   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:33:29.322023   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 16:33:29.322036   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:33:29.322054   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 16:33:29.322566   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:33:29.358552   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:33:29.385459   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:33:29.412086   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:33:29.438700   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:33:29.467983   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:33:29.523599   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:33:29.550279   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:33:29.578043   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 16:33:29.605815   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:33:29.632735   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 16:33:29.658854   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:33:29.679406   20924 ssh_runner.go:195] Run: openssl version
	I0416 16:33:29.686044   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 16:33:29.699832   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 16:33:29.705110   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 16:33:29.705174   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 16:33:29.711814   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:33:29.726799   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:33:29.740775   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:33:29.746029   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:33:29.746080   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:33:29.752395   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:33:29.765346   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 16:33:29.778120   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 16:33:29.783119   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 16:33:29.783176   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 16:33:29.789206   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
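The /etc/ssl/certs/<hash>.0 symlinks created above follow OpenSSL's hashed-directory convention: the link name is the certificate's subject hash, which is how the TLS stack locates a CA during verification. The hash can be reproduced by hand (paths as in the log):

    # The output is expected to match the b5213941.0 link used for minikubeCA above.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"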
	I0416 16:33:29.802387   20924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:33:29.807192   20924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:33:29.807249   20924 kubeadm.go:391] StartCluster: {Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:33:29.807338   20924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 16:33:29.807409   20924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:33:29.851735   20924 cri.go:89] found id: ""
	I0416 16:33:29.851797   20924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:33:29.863644   20924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:33:29.874774   20924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:33:29.886033   20924 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:33:29.886053   20924 kubeadm.go:156] found existing configuration files:
	
	I0416 16:33:29.886092   20924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:33:29.897071   20924 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:33:29.897122   20924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:33:29.908766   20924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:33:29.921517   20924 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:33:29.921572   20924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:33:29.932682   20924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:33:29.943247   20924 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:33:29.943291   20924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:33:29.954112   20924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:33:29.964622   20924 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:33:29.964678   20924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:33:29.975428   20924 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:33:30.235876   20924 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:33:41.323894   20924 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:33:41.323967   20924 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:33:41.324068   20924 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:33:41.324233   20924 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:33:41.324364   20924 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0416 16:33:41.324450   20924 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:33:41.326013   20924 out.go:204]   - Generating certificates and keys ...
	I0416 16:33:41.326107   20924 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:33:41.326199   20924 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:33:41.326286   20924 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:33:41.326358   20924 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:33:41.326438   20924 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:33:41.326510   20924 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:33:41.326580   20924 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:33:41.326732   20924 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-543552 localhost] and IPs [192.168.39.97 127.0.0.1 ::1]
	I0416 16:33:41.326804   20924 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:33:41.326972   20924 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-543552 localhost] and IPs [192.168.39.97 127.0.0.1 ::1]
	I0416 16:33:41.327069   20924 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:33:41.327163   20924 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:33:41.327220   20924 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:33:41.327302   20924 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:33:41.327388   20924 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:33:41.327483   20924 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:33:41.327555   20924 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:33:41.327638   20924 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:33:41.327716   20924 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:33:41.327824   20924 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:33:41.327929   20924 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:33:41.329644   20924 out.go:204]   - Booting up control plane ...
	I0416 16:33:41.329762   20924 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:33:41.329862   20924 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:33:41.329946   20924 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:33:41.330098   20924 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:33:41.330213   20924 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:33:41.330263   20924 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:33:41.330454   20924 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:33:41.330551   20924 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.572752 seconds
	I0416 16:33:41.330688   20924 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:33:41.330833   20924 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:33:41.330921   20924 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:33:41.331143   20924 kubeadm.go:309] [mark-control-plane] Marking the node ha-543552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:33:41.331217   20924 kubeadm.go:309] [bootstrap-token] Using token: wi0m3o.dddy96d54tiolpuf
	I0416 16:33:41.332767   20924 out.go:204]   - Configuring RBAC rules ...
	I0416 16:33:41.332879   20924 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:33:41.332976   20924 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:33:41.333100   20924 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:33:41.333211   20924 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:33:41.333390   20924 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:33:41.333502   20924 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:33:41.333666   20924 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:33:41.333723   20924 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:33:41.333791   20924 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:33:41.333803   20924 kubeadm.go:309] 
	I0416 16:33:41.333871   20924 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:33:41.333890   20924 kubeadm.go:309] 
	I0416 16:33:41.333997   20924 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:33:41.334009   20924 kubeadm.go:309] 
	I0416 16:33:41.334033   20924 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:33:41.334083   20924 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:33:41.334132   20924 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:33:41.334138   20924 kubeadm.go:309] 
	I0416 16:33:41.334200   20924 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:33:41.334207   20924 kubeadm.go:309] 
	I0416 16:33:41.334249   20924 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:33:41.334255   20924 kubeadm.go:309] 
	I0416 16:33:41.334296   20924 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:33:41.334394   20924 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:33:41.334489   20924 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:33:41.334504   20924 kubeadm.go:309] 
	I0416 16:33:41.334623   20924 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:33:41.334725   20924 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:33:41.334735   20924 kubeadm.go:309] 
	I0416 16:33:41.334856   20924 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wi0m3o.dddy96d54tiolpuf \
	I0416 16:33:41.335001   20924 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 \
	I0416 16:33:41.335040   20924 kubeadm.go:309] 	--control-plane 
	I0416 16:33:41.335049   20924 kubeadm.go:309] 
	I0416 16:33:41.335128   20924 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:33:41.335135   20924 kubeadm.go:309] 
	I0416 16:33:41.335200   20924 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wi0m3o.dddy96d54tiolpuf \
	I0416 16:33:41.335305   20924 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 
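The bootstrap token in the join commands above is created with a 24h TTL (see the ttl field in the InitConfiguration earlier), so nodes joined later than that need a fresh token. A new worker join line can be printed with a generic kubeadm command (not part of this run):

    sudo kubeadm token create --print-join-command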
	I0416 16:33:41.335317   20924 cni.go:84] Creating CNI manager for ""
	I0416 16:33:41.335323   20924 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:33:41.337825   20924 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:33:41.339512   20924 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:33:41.369677   20924 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:33:41.369700   20924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:33:41.435637   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
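Once the CNI manifest is applied, the node only reports Ready after the network plugin comes up. A generic readiness wait that mirrors what the tooling verifies later, using the kubectl binary and kubeconfig paths from the log (the timeout is an arbitrary choice):

    sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      wait --for=condition=Ready node/ha-543552 --timeout=120s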
	I0416 16:33:41.897389   20924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:33:41.897457   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:41.897501   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-543552 minikube.k8s.io/updated_at=2024_04_16T16_33_41_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-543552 minikube.k8s.io/primary=true
	I0416 16:33:42.040936   20924 ops.go:34] apiserver oom_adj: -16
	I0416 16:33:42.041306   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:42.541747   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:43.042279   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:43.542385   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:44.041711   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:44.542185   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:45.041624   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:45.541699   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:46.041747   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:46.542056   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:47.041583   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:47.541718   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:48.041939   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:48.541494   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:49.041708   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:49.541982   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:50.042320   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:50.541440   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:51.041601   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:51.541493   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:52.041436   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:52.542239   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:53.041938   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:53.541442   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:54.041409   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:54.240330   20924 kubeadm.go:1107] duration metric: took 12.342931074s to wait for elevateKubeSystemPrivileges
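The run of identical "kubectl get sa default" calls above is a poll: it exits once the default ServiceAccount exists in the default namespace, which is the elevateKubeSystemPrivileges wait timed above at roughly 12.3s. A condensed equivalent, using the binary and kubeconfig paths from the log:

    # Poll until the default ServiceAccount is created by kube-controller-manager.
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done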
	W0416 16:33:54.240375   20924 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:33:54.240385   20924 kubeadm.go:393] duration metric: took 24.433140902s to StartCluster
	I0416 16:33:54.240406   20924 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:54.240495   20924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:33:54.241518   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:54.241791   20924 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:33:54.241827   20924 start.go:240] waiting for startup goroutines ...
	I0416 16:33:54.241812   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:33:54.241843   20924 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:33:54.241939   20924 addons.go:69] Setting storage-provisioner=true in profile "ha-543552"
	I0416 16:33:54.241976   20924 addons.go:234] Setting addon storage-provisioner=true in "ha-543552"
	I0416 16:33:54.242014   20924 addons.go:69] Setting default-storageclass=true in profile "ha-543552"
	I0416 16:33:54.242026   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:33:54.242060   20924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-543552"
	I0416 16:33:54.242022   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:33:54.242450   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.242484   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.242511   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.242541   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.257510   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0416 16:33:54.257532   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I0416 16:33:54.258023   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.258105   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.258562   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.258584   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.258788   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.258812   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.258949   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.259144   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.259315   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:33:54.259554   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.259606   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.261751   20924 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:33:54.262097   20924 kapi.go:59] client config for ha-543552: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt", KeyFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key", CAFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:33:54.262645   20924 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:33:54.262794   20924 addons.go:234] Setting addon default-storageclass=true in "ha-543552"
	I0416 16:33:54.262836   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:33:54.263200   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.263239   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.274303   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0416 16:33:54.274880   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.275393   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.275420   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.275786   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.275982   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:33:54.277617   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:54.279608   20924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:33:54.278047   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0416 16:33:54.280037   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.281169   20924 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:33:54.281182   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:33:54.281194   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:54.281646   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.281672   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.282001   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.282544   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.282572   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.284227   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:54.284666   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:54.284688   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:54.284720   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:54.284885   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:54.285087   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:54.285221   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:54.298327   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0416 16:33:54.298741   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.299215   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.299239   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.299563   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.299759   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:33:54.301278   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:54.301546   20924 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:33:54.301562   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:33:54.301580   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:54.304804   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:54.305209   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:54.305235   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:54.305413   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:54.305611   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:54.305768   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:54.305927   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:54.431124   20924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:33:54.491826   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:33:54.500737   20924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:33:54.785677   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:54.785705   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:54.785989   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:54.786000   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:54.786016   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:54.786032   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:54.786040   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:54.786277   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:54.786295   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:54.786330   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:54.786402   20924 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:33:54.786414   20924 round_trippers.go:469] Request Headers:
	I0416 16:33:54.786422   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:33:54.786426   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:33:54.794932   20924 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 16:33:54.795695   20924 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:33:54.795712   20924 round_trippers.go:469] Request Headers:
	I0416 16:33:54.795723   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:33:54.795728   20924 round_trippers.go:473]     Content-Type: application/json
	I0416 16:33:54.795732   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:33:54.798453   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:33:54.798579   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:54.798592   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:54.798846   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:54.798879   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:54.798893   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:54.909429   20924 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
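Note: the coredns ConfigMap edit logged at 16:33:54.491 above is what produces this "host record injected" message; the sed pipeline inserts a hosts stanza (mapping host.minikube.internal to the gateway 192.168.39.1) ahead of the forward directive, plus a log directive before errors. A minimal way to confirm the result from the host, assuming the ha-543552 kubeconfig context:

    # Dump the patched Corefile and look for the injected stanza.
    kubectl --context ha-543552 -n kube-system get configmap coredns -o yaml
    # Expected fragment (sketch, based on the sed expression above):
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }
    #   forward . /etc/resolv.conf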
	I0416 16:33:55.124934   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:55.124958   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:55.125260   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:55.125279   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:55.125287   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:55.125285   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:55.125296   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:55.125504   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:55.125518   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:55.125531   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:55.127553   20924 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0416 16:33:55.128988   20924 addons.go:505] duration metric: took 887.155371ms for enable addons: enabled=[default-storageclass storage-provisioner]
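Note: the two addons enabled here can be checked from outside the cluster; the PUT to /apis/storage.k8s.io/v1/storageclasses/standard above corresponds to the default-storageclass addon, and storage-provisioner runs as a pod in kube-system. A verification sketch, assuming the same ha-543552 context:

    # "standard" should be listed (and marked default) by the default-storageclass addon.
    kubectl --context ha-543552 get storageclass
    # The storage-provisioner pod should appear among the kube-system pods.
    kubectl --context ha-543552 -n kube-system get pods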
	I0416 16:33:55.129028   20924 start.go:245] waiting for cluster config update ...
	I0416 16:33:55.129045   20924 start.go:254] writing updated cluster config ...
	I0416 16:33:55.130989   20924 out.go:177] 
	I0416 16:33:55.132535   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:33:55.132650   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:33:55.134505   20924 out.go:177] * Starting "ha-543552-m02" control-plane node in "ha-543552" cluster
	I0416 16:33:55.135806   20924 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:33:55.135835   20924 cache.go:56] Caching tarball of preloaded images
	I0416 16:33:55.135935   20924 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:33:55.135956   20924 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:33:55.136048   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:33:55.136226   20924 start.go:360] acquireMachinesLock for ha-543552-m02: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:33:55.136269   20924 start.go:364] duration metric: took 24.383µs to acquireMachinesLock for "ha-543552-m02"
	I0416 16:33:55.136288   20924 start.go:93] Provisioning new machine with config: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:33:55.136358   20924 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0416 16:33:55.137854   20924 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:33:55.137934   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:55.137960   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:55.151981   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0416 16:33:55.152324   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:55.152809   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:55.152845   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:55.153127   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:55.153373   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetMachineName
	I0416 16:33:55.153509   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:33:55.153690   20924 start.go:159] libmachine.API.Create for "ha-543552" (driver="kvm2")
	I0416 16:33:55.153718   20924 client.go:168] LocalClient.Create starting
	I0416 16:33:55.153752   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 16:33:55.153789   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:33:55.153802   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:33:55.153850   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 16:33:55.153877   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:33:55.153888   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:33:55.153904   20924 main.go:141] libmachine: Running pre-create checks...
	I0416 16:33:55.153912   20924 main.go:141] libmachine: (ha-543552-m02) Calling .PreCreateCheck
	I0416 16:33:55.154090   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetConfigRaw
	I0416 16:33:55.154433   20924 main.go:141] libmachine: Creating machine...
	I0416 16:33:55.154448   20924 main.go:141] libmachine: (ha-543552-m02) Calling .Create
	I0416 16:33:55.154580   20924 main.go:141] libmachine: (ha-543552-m02) Creating KVM machine...
	I0416 16:33:55.155669   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found existing default KVM network
	I0416 16:33:55.155761   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found existing private KVM network mk-ha-543552
	I0416 16:33:55.155860   20924 main.go:141] libmachine: (ha-543552-m02) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02 ...
	I0416 16:33:55.155914   20924 main.go:141] libmachine: (ha-543552-m02) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:33:55.155935   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:55.155835   21317 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:33:55.156018   20924 main.go:141] libmachine: (ha-543552-m02) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:33:55.391752   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:55.391649   21317 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa...
	I0416 16:33:55.544429   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:55.544327   21317 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/ha-543552-m02.rawdisk...
	I0416 16:33:55.544456   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Writing magic tar header
	I0416 16:33:55.544466   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Writing SSH key tar header
	I0416 16:33:55.544474   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:55.544430   21317 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02 ...
	I0416 16:33:55.544557   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02
	I0416 16:33:55.544596   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02 (perms=drwx------)
	I0416 16:33:55.544621   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:33:55.544637   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 16:33:55.544655   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:33:55.544671   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 16:33:55.544682   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 16:33:55.544696   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 16:33:55.544705   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:33:55.544722   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 16:33:55.544730   20924 main.go:141] libmachine: (ha-543552-m02) Creating domain...
	I0416 16:33:55.544751   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:33:55.544767   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:33:55.544779   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home
	I0416 16:33:55.544788   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Skipping /home - not owner
	I0416 16:33:55.545596   20924 main.go:141] libmachine: (ha-543552-m02) define libvirt domain using xml: 
	I0416 16:33:55.545616   20924 main.go:141] libmachine: (ha-543552-m02) <domain type='kvm'>
	I0416 16:33:55.545623   20924 main.go:141] libmachine: (ha-543552-m02)   <name>ha-543552-m02</name>
	I0416 16:33:55.545629   20924 main.go:141] libmachine: (ha-543552-m02)   <memory unit='MiB'>2200</memory>
	I0416 16:33:55.545634   20924 main.go:141] libmachine: (ha-543552-m02)   <vcpu>2</vcpu>
	I0416 16:33:55.545642   20924 main.go:141] libmachine: (ha-543552-m02)   <features>
	I0416 16:33:55.545677   20924 main.go:141] libmachine: (ha-543552-m02)     <acpi/>
	I0416 16:33:55.545705   20924 main.go:141] libmachine: (ha-543552-m02)     <apic/>
	I0416 16:33:55.545716   20924 main.go:141] libmachine: (ha-543552-m02)     <pae/>
	I0416 16:33:55.545728   20924 main.go:141] libmachine: (ha-543552-m02)     
	I0416 16:33:55.545738   20924 main.go:141] libmachine: (ha-543552-m02)   </features>
	I0416 16:33:55.545770   20924 main.go:141] libmachine: (ha-543552-m02)   <cpu mode='host-passthrough'>
	I0416 16:33:55.545788   20924 main.go:141] libmachine: (ha-543552-m02)   
	I0416 16:33:55.545798   20924 main.go:141] libmachine: (ha-543552-m02)   </cpu>
	I0416 16:33:55.545811   20924 main.go:141] libmachine: (ha-543552-m02)   <os>
	I0416 16:33:55.545824   20924 main.go:141] libmachine: (ha-543552-m02)     <type>hvm</type>
	I0416 16:33:55.545835   20924 main.go:141] libmachine: (ha-543552-m02)     <boot dev='cdrom'/>
	I0416 16:33:55.545849   20924 main.go:141] libmachine: (ha-543552-m02)     <boot dev='hd'/>
	I0416 16:33:55.545867   20924 main.go:141] libmachine: (ha-543552-m02)     <bootmenu enable='no'/>
	I0416 16:33:55.545881   20924 main.go:141] libmachine: (ha-543552-m02)   </os>
	I0416 16:33:55.545893   20924 main.go:141] libmachine: (ha-543552-m02)   <devices>
	I0416 16:33:55.545909   20924 main.go:141] libmachine: (ha-543552-m02)     <disk type='file' device='cdrom'>
	I0416 16:33:55.545926   20924 main.go:141] libmachine: (ha-543552-m02)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/boot2docker.iso'/>
	I0416 16:33:55.545954   20924 main.go:141] libmachine: (ha-543552-m02)       <target dev='hdc' bus='scsi'/>
	I0416 16:33:55.545976   20924 main.go:141] libmachine: (ha-543552-m02)       <readonly/>
	I0416 16:33:55.545990   20924 main.go:141] libmachine: (ha-543552-m02)     </disk>
	I0416 16:33:55.546003   20924 main.go:141] libmachine: (ha-543552-m02)     <disk type='file' device='disk'>
	I0416 16:33:55.546028   20924 main.go:141] libmachine: (ha-543552-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:33:55.546046   20924 main.go:141] libmachine: (ha-543552-m02)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/ha-543552-m02.rawdisk'/>
	I0416 16:33:55.546057   20924 main.go:141] libmachine: (ha-543552-m02)       <target dev='hda' bus='virtio'/>
	I0416 16:33:55.546062   20924 main.go:141] libmachine: (ha-543552-m02)     </disk>
	I0416 16:33:55.546070   20924 main.go:141] libmachine: (ha-543552-m02)     <interface type='network'>
	I0416 16:33:55.546075   20924 main.go:141] libmachine: (ha-543552-m02)       <source network='mk-ha-543552'/>
	I0416 16:33:55.546084   20924 main.go:141] libmachine: (ha-543552-m02)       <model type='virtio'/>
	I0416 16:33:55.546091   20924 main.go:141] libmachine: (ha-543552-m02)     </interface>
	I0416 16:33:55.546099   20924 main.go:141] libmachine: (ha-543552-m02)     <interface type='network'>
	I0416 16:33:55.546108   20924 main.go:141] libmachine: (ha-543552-m02)       <source network='default'/>
	I0416 16:33:55.546120   20924 main.go:141] libmachine: (ha-543552-m02)       <model type='virtio'/>
	I0416 16:33:55.546128   20924 main.go:141] libmachine: (ha-543552-m02)     </interface>
	I0416 16:33:55.546136   20924 main.go:141] libmachine: (ha-543552-m02)     <serial type='pty'>
	I0416 16:33:55.546153   20924 main.go:141] libmachine: (ha-543552-m02)       <target port='0'/>
	I0416 16:33:55.546161   20924 main.go:141] libmachine: (ha-543552-m02)     </serial>
	I0416 16:33:55.546168   20924 main.go:141] libmachine: (ha-543552-m02)     <console type='pty'>
	I0416 16:33:55.546174   20924 main.go:141] libmachine: (ha-543552-m02)       <target type='serial' port='0'/>
	I0416 16:33:55.546181   20924 main.go:141] libmachine: (ha-543552-m02)     </console>
	I0416 16:33:55.546187   20924 main.go:141] libmachine: (ha-543552-m02)     <rng model='virtio'>
	I0416 16:33:55.546203   20924 main.go:141] libmachine: (ha-543552-m02)       <backend model='random'>/dev/random</backend>
	I0416 16:33:55.546217   20924 main.go:141] libmachine: (ha-543552-m02)     </rng>
	I0416 16:33:55.546225   20924 main.go:141] libmachine: (ha-543552-m02)     
	I0416 16:33:55.546229   20924 main.go:141] libmachine: (ha-543552-m02)     
	I0416 16:33:55.546235   20924 main.go:141] libmachine: (ha-543552-m02)   </devices>
	I0416 16:33:55.546239   20924 main.go:141] libmachine: (ha-543552-m02) </domain>
	I0416 16:33:55.546250   20924 main.go:141] libmachine: (ha-543552-m02) 
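Note: the domain XML printed above is what libmachine defines for the m02 node: 2 vCPUs, 2200 MiB of RAM, the raw disk plus the boot2docker ISO, and two virtio NICs (the private mk-ha-543552 network and the default network). Assuming access to the same qemu:///system libvirt instance named in the config above, the defined domain can be inspected with stock virsh commands, purely as an illustrative check:

    # List domains, dump the XML libmachine generated, and show both NICs with their MACs.
    virsh -c qemu:///system list --all
    virsh -c qemu:///system dumpxml ha-543552-m02
    virsh -c qemu:///system domiflist ha-543552-m02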
	I0416 16:33:55.553129   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:fb:d7:4e in network default
	I0416 16:33:55.553850   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:55.553863   20924 main.go:141] libmachine: (ha-543552-m02) Ensuring networks are active...
	I0416 16:33:55.554641   20924 main.go:141] libmachine: (ha-543552-m02) Ensuring network default is active
	I0416 16:33:55.554973   20924 main.go:141] libmachine: (ha-543552-m02) Ensuring network mk-ha-543552 is active
	I0416 16:33:55.555440   20924 main.go:141] libmachine: (ha-543552-m02) Getting domain xml...
	I0416 16:33:55.556163   20924 main.go:141] libmachine: (ha-543552-m02) Creating domain...
	I0416 16:33:56.805900   20924 main.go:141] libmachine: (ha-543552-m02) Waiting to get IP...
	I0416 16:33:56.806770   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:56.807175   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:56.807230   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:56.807166   21317 retry.go:31] will retry after 290.248104ms: waiting for machine to come up
	I0416 16:33:57.098662   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:57.099157   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:57.099186   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:57.099114   21317 retry.go:31] will retry after 330.769379ms: waiting for machine to come up
	I0416 16:33:57.431847   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:57.432297   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:57.432322   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:57.432258   21317 retry.go:31] will retry after 366.242177ms: waiting for machine to come up
	I0416 16:33:57.799714   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:57.800180   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:57.800206   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:57.800142   21317 retry.go:31] will retry after 455.971916ms: waiting for machine to come up
	I0416 16:33:58.258614   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:58.259169   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:58.259213   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:58.259131   21317 retry.go:31] will retry after 490.210716ms: waiting for machine to come up
	I0416 16:33:58.750814   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:58.751413   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:58.751442   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:58.751356   21317 retry.go:31] will retry after 828.445668ms: waiting for machine to come up
	I0416 16:33:59.581783   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:59.582201   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:59.582230   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:59.582155   21317 retry.go:31] will retry after 798.686835ms: waiting for machine to come up
	I0416 16:34:00.382679   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:00.383142   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:00.383172   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:00.383042   21317 retry.go:31] will retry after 1.326441349s: waiting for machine to come up
	I0416 16:34:01.711538   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:01.712102   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:01.712126   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:01.712057   21317 retry.go:31] will retry after 1.802384547s: waiting for machine to come up
	I0416 16:34:03.516941   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:03.517457   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:03.517489   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:03.517417   21317 retry.go:31] will retry after 1.596867743s: waiting for machine to come up
	I0416 16:34:05.116164   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:05.116604   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:05.116653   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:05.116537   21317 retry.go:31] will retry after 2.252441268s: waiting for machine to come up
	I0416 16:34:07.371108   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:07.371563   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:07.371580   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:07.371529   21317 retry.go:31] will retry after 2.942887808s: waiting for machine to come up
	I0416 16:34:10.316223   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:10.316554   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:10.316592   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:10.316521   21317 retry.go:31] will retry after 3.833251525s: waiting for machine to come up
	I0416 16:34:14.153828   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:14.154276   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:14.154303   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:14.154231   21317 retry.go:31] will retry after 4.748429365s: waiting for machine to come up
	I0416 16:34:18.903815   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:18.904267   20924 main.go:141] libmachine: (ha-543552-m02) Found IP for machine: 192.168.39.80
	I0416 16:34:18.904298   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has current primary IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:18.904308   20924 main.go:141] libmachine: (ha-543552-m02) Reserving static IP address...
	I0416 16:34:18.904758   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find host DHCP lease matching {name: "ha-543552-m02", mac: "52:54:00:bd:b0:d7", ip: "192.168.39.80"} in network mk-ha-543552
	I0416 16:34:18.975022   20924 main.go:141] libmachine: (ha-543552-m02) Reserved static IP address: 192.168.39.80
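Note: the retry loop above is polling the DHCP leases of the mk-ha-543552 network until the new MAC (52:54:00:bd:b0:d7) obtains an address; once 192.168.39.80 appears, it is reserved as a static lease. The same lease table can be read manually, again assuming the qemu:///system connection:

    # Show current DHCP leases on the cluster's private libvirt network.
    virsh -c qemu:///system net-dhcp-leases mk-ha-543552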
	I0416 16:34:18.975054   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Getting to WaitForSSH function...
	I0416 16:34:18.975061   20924 main.go:141] libmachine: (ha-543552-m02) Waiting for SSH to be available...
	I0416 16:34:18.977405   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:18.977775   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552
	I0416 16:34:18.977801   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find defined IP address of network mk-ha-543552 interface with MAC address 52:54:00:bd:b0:d7
	I0416 16:34:18.977907   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using SSH client type: external
	I0416 16:34:18.977935   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa (-rw-------)
	I0416 16:34:18.977975   20924 main.go:141] libmachine: (ha-543552-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:34:18.977993   20924 main.go:141] libmachine: (ha-543552-m02) DBG | About to run SSH command:
	I0416 16:34:18.978027   20924 main.go:141] libmachine: (ha-543552-m02) DBG | exit 0
	I0416 16:34:18.981475   20924 main.go:141] libmachine: (ha-543552-m02) DBG | SSH cmd err, output: exit status 255: 
	I0416 16:34:18.981493   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0416 16:34:18.981502   20924 main.go:141] libmachine: (ha-543552-m02) DBG | command : exit 0
	I0416 16:34:18.981509   20924 main.go:141] libmachine: (ha-543552-m02) DBG | err     : exit status 255
	I0416 16:34:18.981520   20924 main.go:141] libmachine: (ha-543552-m02) DBG | output  : 
	I0416 16:34:21.983020   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Getting to WaitForSSH function...
	I0416 16:34:21.985687   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:21.986122   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:21.986171   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:21.986264   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using SSH client type: external
	I0416 16:34:21.986281   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa (-rw-------)
	I0416 16:34:21.986334   20924 main.go:141] libmachine: (ha-543552-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:34:21.986377   20924 main.go:141] libmachine: (ha-543552-m02) DBG | About to run SSH command:
	I0416 16:34:21.986387   20924 main.go:141] libmachine: (ha-543552-m02) DBG | exit 0
	I0416 16:34:22.112817   20924 main.go:141] libmachine: (ha-543552-m02) DBG | SSH cmd err, output: <nil>: 
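Note: the first WaitForSSH probe fails with exit status 255 because sshd in the guest is not up yet; the second attempt, using the external ssh client with the options logged above, succeeds. The equivalent manual probe, reusing the key path and options from the log, would be:

    # Exit code 0 means the guest is accepting SSH logins.
    ssh -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        docker@192.168.39.80 'exit 0'
    echo $?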
	I0416 16:34:22.113141   20924 main.go:141] libmachine: (ha-543552-m02) KVM machine creation complete!
	I0416 16:34:22.113447   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetConfigRaw
	I0416 16:34:22.113975   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:22.114193   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:22.114344   20924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:34:22.114360   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:34:22.115545   20924 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:34:22.115561   20924 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:34:22.115566   20924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:34:22.115573   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.117775   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.118089   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.118117   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.118217   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.118374   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.118525   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.118662   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.118837   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.119047   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.119060   20924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:34:22.220170   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:34:22.220194   20924 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:34:22.220202   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.222897   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.223254   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.223276   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.223480   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.223679   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.223908   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.224056   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.224273   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.224475   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.224488   20924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:34:22.329931   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:34:22.329984   20924 main.go:141] libmachine: found compatible host: buildroot
	I0416 16:34:22.329990   20924 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:34:22.329998   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetMachineName
	I0416 16:34:22.330226   20924 buildroot.go:166] provisioning hostname "ha-543552-m02"
	I0416 16:34:22.330248   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetMachineName
	I0416 16:34:22.330429   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.332660   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.332974   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.332998   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.333149   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.333316   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.333441   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.333548   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.333677   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.333879   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.333892   20924 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-543552-m02 && echo "ha-543552-m02" | sudo tee /etc/hostname
	I0416 16:34:22.456829   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552-m02
	
	I0416 16:34:22.456878   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.459435   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.459829   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.459874   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.460003   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.460184   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.460334   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.460453   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.460590   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.460820   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.460856   20924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-543552-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-543552-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-543552-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:34:22.575836   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
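Note: the two provisioning commands above set the guest hostname to ha-543552-m02 and pin that name to 127.0.1.1 in /etc/hosts (replacing an existing 127.0.1.1 entry or appending one). A quick check over the same SSH session, as a sketch:

    # Both should report ha-543552-m02.
    hostname
    grep '^127.0.1.1' /etc/hosts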
	I0416 16:34:22.575867   20924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:34:22.575897   20924 buildroot.go:174] setting up certificates
	I0416 16:34:22.575907   20924 provision.go:84] configureAuth start
	I0416 16:34:22.575915   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetMachineName
	I0416 16:34:22.576177   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:34:22.578790   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.579083   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.579112   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.579193   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.581334   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.581677   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.581706   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.581853   20924 provision.go:143] copyHostCerts
	I0416 16:34:22.581893   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:34:22.581925   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 16:34:22.581935   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:34:22.581995   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:34:22.582060   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:34:22.582077   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 16:34:22.582083   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:34:22.582108   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:34:22.582146   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:34:22.582162   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 16:34:22.582168   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:34:22.582187   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:34:22.582228   20924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.ha-543552-m02 san=[127.0.0.1 192.168.39.80 ha-543552-m02 localhost minikube]
	I0416 16:34:22.771886   20924 provision.go:177] copyRemoteCerts
	I0416 16:34:22.771948   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:34:22.771968   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.774250   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.774576   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.774610   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.774793   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.774976   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.775087   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.775262   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:34:22.855612   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 16:34:22.855681   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:34:22.885615   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 16:34:22.885673   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:34:22.910435   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 16:34:22.910504   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 16:34:22.937196   20924 provision.go:87] duration metric: took 361.278852ms to configureAuth
	I0416 16:34:22.937221   20924 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:34:22.937426   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:34:22.937514   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.939839   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.940220   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.940258   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.940424   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.940606   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.940789   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.940945   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.941165   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.941376   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.941401   20924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:34:23.226298   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 16:34:23.226327   20924 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:34:23.226337   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetURL
	I0416 16:34:23.227535   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using libvirt version 6000000
	I0416 16:34:23.229393   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.229765   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.229793   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.229947   20924 main.go:141] libmachine: Docker is up and running!
	I0416 16:34:23.229961   20924 main.go:141] libmachine: Reticulating splines...
	I0416 16:34:23.229967   20924 client.go:171] duration metric: took 28.076240598s to LocalClient.Create
	I0416 16:34:23.229989   20924 start.go:167] duration metric: took 28.076300549s to libmachine.API.Create "ha-543552"
	I0416 16:34:23.229998   20924 start.go:293] postStartSetup for "ha-543552-m02" (driver="kvm2")
	I0416 16:34:23.230009   20924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:34:23.230025   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.230257   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:34:23.230277   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:23.232074   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.232372   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.232401   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.232506   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:23.232690   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.232805   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:23.232940   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:34:23.318833   20924 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:34:23.323995   20924 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:34:23.324019   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:34:23.324090   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:34:23.324176   20924 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 16:34:23.324187   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 16:34:23.324288   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:34:23.335957   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:34:23.363475   20924 start.go:296] duration metric: took 133.465137ms for postStartSetup
	I0416 16:34:23.363523   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetConfigRaw
	I0416 16:34:23.364079   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:34:23.366654   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.366969   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.367002   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.367189   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:34:23.367411   20924 start.go:128] duration metric: took 28.231042081s to createHost
	I0416 16:34:23.367438   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:23.369594   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.369917   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.369945   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.370071   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:23.370238   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.370374   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.370482   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:23.370661   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:23.370814   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:23.370824   20924 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:34:23.474504   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285263.449803813
	
	I0416 16:34:23.474530   20924 fix.go:216] guest clock: 1713285263.449803813
	I0416 16:34:23.474540   20924 fix.go:229] Guest: 2024-04-16 16:34:23.449803813 +0000 UTC Remote: 2024-04-16 16:34:23.367426008 +0000 UTC m=+85.602605598 (delta=82.377805ms)
	I0416 16:34:23.474562   20924 fix.go:200] guest clock delta is within tolerance: 82.377805ms
	I0416 16:34:23.474570   20924 start.go:83] releasing machines lock for "ha-543552-m02", held for 28.33828969s
	I0416 16:34:23.474597   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.474858   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:34:23.477502   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.477898   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.477930   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.480172   20924 out.go:177] * Found network options:
	I0416 16:34:23.481476   20924 out.go:177]   - NO_PROXY=192.168.39.97
	W0416 16:34:23.482700   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:34:23.482737   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.483285   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.483471   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.483533   20924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:34:23.483585   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	W0416 16:34:23.483658   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:34:23.483729   20924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:34:23.483751   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:23.485877   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.486138   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.486273   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.486298   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.486404   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:23.486528   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.486553   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.486591   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.486738   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:23.486806   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:23.486883   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:34:23.486982   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.487114   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:23.487236   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:34:23.729972   20924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:34:23.737219   20924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:34:23.737297   20924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:34:23.754231   20924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:34:23.754256   20924 start.go:494] detecting cgroup driver to use...
	I0416 16:34:23.754321   20924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:34:23.771935   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:34:23.786287   20924 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:34:23.786346   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:34:23.800482   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:34:23.814464   20924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:34:23.928514   20924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:34:24.097127   20924 docker.go:233] disabling docker service ...
	I0416 16:34:24.097199   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:34:24.113295   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:34:24.128010   20924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:34:24.274991   20924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:34:24.416672   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:34:24.432104   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:34:24.453292   20924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:34:24.453343   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.464454   20924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:34:24.464520   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.475537   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.486405   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.497553   20924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:34:24.508506   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.519217   20924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.537820   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
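Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A rough way to confirm the result on the node would be the following (illustrative only, not part of the test run; the file may carry other keys as well):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected lines, per the edits logged above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",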
	I0416 16:34:24.549220   20924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:34:24.560485   20924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:34:24.560526   20924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:34:24.575768   20924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:34:24.585837   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:34:24.715640   20924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 16:34:24.878193   20924 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:34:24.878290   20924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:34:24.883908   20924 start.go:562] Will wait 60s for crictl version
	I0416 16:34:24.883955   20924 ssh_runner.go:195] Run: which crictl
	I0416 16:34:24.888726   20924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:34:24.929464   20924 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:34:24.929557   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:34:24.962334   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:34:24.999017   20924 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:34:25.000480   20924 out.go:177]   - env NO_PROXY=192.168.39.97
	I0416 16:34:25.001899   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:34:25.004680   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:25.005077   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:25.005106   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:25.005292   20924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:34:25.009855   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:34:25.024278   20924 mustload.go:65] Loading cluster: ha-543552
	I0416 16:34:25.024447   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:34:25.024689   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:34:25.024713   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:34:25.039191   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0416 16:34:25.039565   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:34:25.039997   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:34:25.040018   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:34:25.040318   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:34:25.040481   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:34:25.042108   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:34:25.042384   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:34:25.042408   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:34:25.056113   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0416 16:34:25.056591   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:34:25.057140   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:34:25.057164   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:34:25.057471   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:34:25.057696   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:34:25.057864   20924 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552 for IP: 192.168.39.80
	I0416 16:34:25.057874   20924 certs.go:194] generating shared ca certs ...
	I0416 16:34:25.057893   20924 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:34:25.058007   20924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:34:25.058050   20924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:34:25.058059   20924 certs.go:256] generating profile certs ...
	I0416 16:34:25.058131   20924 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key
	I0416 16:34:25.058153   20924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.46ba120c
	I0416 16:34:25.058166   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.46ba120c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.80 192.168.39.254]
	I0416 16:34:25.130651   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.46ba120c ...
	I0416 16:34:25.130681   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.46ba120c: {Name:mk66a4e33abe84b39a7f3396faacd5c2278877b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:34:25.130868   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.46ba120c ...
	I0416 16:34:25.130886   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.46ba120c: {Name:mk2fdedebc09799117b95168bd2138cb3e367cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:34:25.130991   20924 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.46ba120c -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt
	I0416 16:34:25.131123   20924 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.46ba120c -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key
	I0416 16:34:25.131250   20924 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key
	I0416 16:34:25.131265   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:34:25.131276   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:34:25.131290   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:34:25.131303   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:34:25.131315   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:34:25.131327   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:34:25.131339   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:34:25.131350   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:34:25.131391   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 16:34:25.131417   20924 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 16:34:25.131427   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:34:25.131449   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:34:25.131470   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:34:25.131495   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:34:25.131530   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:34:25.131561   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 16:34:25.131575   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 16:34:25.131587   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:34:25.131615   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:34:25.134758   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:34:25.135101   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:34:25.135128   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:34:25.135262   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:34:25.135460   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:34:25.135626   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:34:25.135765   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:34:25.213221   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0416 16:34:25.218720   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0416 16:34:25.233107   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0416 16:34:25.244022   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0416 16:34:25.255609   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0416 16:34:25.261372   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0416 16:34:25.279105   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0416 16:34:25.284988   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0416 16:34:25.296237   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0416 16:34:25.301027   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0416 16:34:25.312044   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0416 16:34:25.317034   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0416 16:34:25.328728   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:34:25.359168   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:34:25.388786   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:34:25.419258   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:34:25.445596   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0416 16:34:25.473023   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:34:25.499377   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:34:25.525099   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:34:25.552145   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 16:34:25.578152   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 16:34:25.608274   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:34:25.635422   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0416 16:34:25.654110   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0416 16:34:25.672130   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0416 16:34:25.690269   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0416 16:34:25.708055   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0416 16:34:25.725889   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0416 16:34:25.743861   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0416 16:34:25.761978   20924 ssh_runner.go:195] Run: openssl version
	I0416 16:34:25.767681   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 16:34:25.778660   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 16:34:25.783228   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 16:34:25.783267   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 16:34:25.789477   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 16:34:25.800954   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 16:34:25.813778   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 16:34:25.818775   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 16:34:25.818820   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 16:34:25.824660   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:34:25.836621   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:34:25.848559   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:34:25.853482   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:34:25.853524   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:34:25.859490   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
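The link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject-hash values of the corresponding certificates, which is why each ln -fs is preceded by an openssl x509 -hash -noout run. For the minikube CA, for example, a spot check on the node would look like this (illustrative commands, not part of the test run):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem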
	I0416 16:34:25.870940   20924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:34:25.875499   20924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:34:25.875543   20924 kubeadm.go:928] updating node {m02 192.168.39.80 8443 v1.29.3 crio true true} ...
	I0416 16:34:25.875624   20924 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-543552-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:34:25.875658   20924 kube-vip.go:111] generating kube-vip config ...
	I0416 16:34:25.875694   20924 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:34:25.892959   20924 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:34:25.893023   20924 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
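The manifest above is what later gets copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1346-byte scp further down), where the kubelet runs it as a static pod that advertises the control-plane VIP 192.168.39.254 and load-balances port 8443. A quick check that it landed on the node might be (illustrative only, not part of the test run):

    sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml
    sudo crictl ps --name kube-vip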
	I0416 16:34:25.893063   20924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:34:25.903902   20924 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0416 16:34:25.903968   20924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0416 16:34:25.914683   20924 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0416 16:34:25.914718   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 16:34:25.914794   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 16:34:25.914815   20924 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0416 16:34:25.914826   20924 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0416 16:34:25.921212   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 16:34:25.921234   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0416 16:34:26.899745   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:34:26.915916   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 16:34:26.916010   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 16:34:26.920725   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 16:34:26.920760   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0416 16:34:29.403380   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 16:34:29.403452   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 16:34:29.408991   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 16:34:29.409020   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0416 16:34:29.668153   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0416 16:34:29.679072   20924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0416 16:34:29.696668   20924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:34:29.715587   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0416 16:34:29.733793   20924 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:34:29.738240   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:34:29.752017   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:34:29.893269   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:34:29.913168   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:34:29.913662   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:34:29.913718   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:34:29.928180   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0416 16:34:29.928609   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:34:29.929070   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:34:29.929093   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:34:29.929372   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:34:29.929585   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:34:29.929778   20924 start.go:316] joinCluster: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:34:29.929901   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0416 16:34:29.929922   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:34:29.932933   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:34:29.933281   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:34:29.933305   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:34:29.933465   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:34:29.933627   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:34:29.933759   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:34:29.933878   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:34:30.095359   20924 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:34:30.095412   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jubk2w.77e69lakqh5t8imx --discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-543552-m02 --control-plane --apiserver-advertise-address=192.168.39.80 --apiserver-bind-port=8443"
	I0416 16:34:54.225301   20924 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jubk2w.77e69lakqh5t8imx --discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-543552-m02 --control-plane --apiserver-advertise-address=192.168.39.80 --apiserver-bind-port=8443": (24.129863771s)
	I0416 16:34:54.225336   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0416 16:34:54.783553   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-543552-m02 minikube.k8s.io/updated_at=2024_04_16T16_34_54_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-543552 minikube.k8s.io/primary=false
	I0416 16:34:54.915785   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-543552-m02 node-role.kubernetes.io/control-plane:NoSchedule-
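The two kubectl runs above finish off the freshly joined node: the label records the minikube version and commit and marks it as non-primary, and the trailing "-" on the taint name removes node-role.kubernetes.io/control-plane:NoSchedule so ordinary pods can be scheduled onto this control-plane node. Equivalent spot checks from a workstation might look like this (illustrative; assumes the ha-543552 kubeconfig context that minikube writes locally):

    kubectl --context ha-543552 get node ha-543552-m02 --show-labels
    kubectl --context ha-543552 describe node ha-543552-m02 | grep -i taints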
	I0416 16:34:55.053668   20924 start.go:318] duration metric: took 25.123888154s to joinCluster
	I0416 16:34:55.053747   20924 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:34:55.055581   20924 out.go:177] * Verifying Kubernetes components...
	I0416 16:34:55.054049   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:34:55.056966   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:34:55.321939   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:34:55.359456   20924 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:34:55.359860   20924 kapi.go:59] client config for ha-543552: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt", KeyFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key", CAFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0416 16:34:55.359945   20924 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.97:8443
	I0416 16:34:55.360202   20924 node_ready.go:35] waiting up to 6m0s for node "ha-543552-m02" to be "Ready" ...
	I0416 16:34:55.360301   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:55.360309   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:55.360319   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:55.360328   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:55.371769   20924 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0416 16:34:55.860690   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:55.860710   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:55.860718   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:55.860724   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:55.872958   20924 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0416 16:34:56.360788   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:56.360809   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:56.360817   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:56.360822   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:56.364800   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:56.861251   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:56.861273   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:56.861281   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:56.861286   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:56.866167   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:34:57.360477   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:57.360499   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:57.360507   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:57.360511   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:57.364611   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:34:57.365440   20924 node_ready.go:53] node "ha-543552-m02" has status "Ready":"False"
	I0416 16:34:57.860989   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:57.861011   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:57.861020   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:57.861024   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:57.863836   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.360808   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:58.360832   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.360855   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.360863   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.364722   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:58.365505   20924 node_ready.go:49] node "ha-543552-m02" has status "Ready":"True"
	I0416 16:34:58.365519   20924 node_ready.go:38] duration metric: took 3.00529425s for node "ha-543552-m02" to be "Ready" ...
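
The block above is minikube's node_ready check: it re-fetches the Node object roughly every 500ms until its Ready condition reports True (about 3s in this run). A minimal client-go sketch of the same polling pattern, assuming a hypothetical kubeconfig path and the node name from this run (illustrative, not minikube's own code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; minikube keeps its own under the profile directory.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 500ms, give up after 6 minutes, matching the timeout in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-543552-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying on transient errors
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("node ha-543552-m02 is Ready")
    }
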
	I0416 16:34:58.365527   20924 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:34:58.365586   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:34:58.365596   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.365602   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.365606   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.371176   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:34:58.378037   20924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-k7bn7" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.378119   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-k7bn7
	I0416 16:34:58.378128   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.378135   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.378139   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.381456   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:58.382076   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:34:58.382090   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.382097   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.382101   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.384679   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.385141   20924 pod_ready.go:92] pod "coredns-76f75df574-k7bn7" in "kube-system" namespace has status "Ready":"True"
	I0416 16:34:58.385156   20924 pod_ready.go:81] duration metric: took 7.099248ms for pod "coredns-76f75df574-k7bn7" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.385163   20924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-l9zck" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.385202   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-l9zck
	I0416 16:34:58.385210   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.385216   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.385220   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.387884   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.388813   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:34:58.388830   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.388857   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.388865   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.391307   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.391937   20924 pod_ready.go:92] pod "coredns-76f75df574-l9zck" in "kube-system" namespace has status "Ready":"True"
	I0416 16:34:58.391952   20924 pod_ready.go:81] duration metric: took 6.783007ms for pod "coredns-76f75df574-l9zck" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.391962   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.392016   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552
	I0416 16:34:58.392027   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.392036   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.392044   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.394388   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.395127   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:34:58.395140   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.395147   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.395151   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.397646   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.398135   20924 pod_ready.go:92] pod "etcd-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:34:58.398150   20924 pod_ready.go:81] duration metric: took 6.181338ms for pod "etcd-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.398160   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.398213   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:34:58.398225   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.398235   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.398241   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.400559   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.401292   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:58.401305   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.401313   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.401317   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.404804   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:58.898417   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:34:58.898442   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.898453   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.898460   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.901864   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:58.902909   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:58.902921   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.902929   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.902933   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.905783   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:59.398680   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:34:59.398708   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:59.398720   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:59.398727   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:59.402509   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:59.403286   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:59.403307   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:59.403318   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:59.403324   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:59.406316   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:59.898348   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:34:59.898370   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:59.898380   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:59.898386   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:59.903211   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:34:59.903838   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:59.903852   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:59.903860   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:59.903865   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:59.907217   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:00.398582   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:00.398603   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:00.398610   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:00.398615   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:00.402093   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:00.403072   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:00.403087   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:00.403095   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:00.403099   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:00.405948   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:00.406463   20924 pod_ready.go:102] pod "etcd-ha-543552-m02" in "kube-system" namespace has status "Ready":"False"
	I0416 16:35:00.898723   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:00.898743   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:00.898757   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:00.898765   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:00.904026   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:35:00.904960   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:00.904973   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:00.904981   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:00.904986   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:00.907204   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:01.399369   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:01.399390   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:01.399399   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:01.399403   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:01.403490   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:35:01.404599   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:01.404614   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:01.404619   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:01.404622   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:01.407503   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:01.898558   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:01.898578   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:01.898586   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:01.898590   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:01.901958   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:01.903193   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:01.903209   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:01.903219   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:01.903227   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:01.906563   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:02.398491   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:02.398513   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:02.398521   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:02.398525   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:02.402238   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:02.403169   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:02.403186   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:02.403196   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:02.403201   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:02.406452   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:02.407304   20924 pod_ready.go:102] pod "etcd-ha-543552-m02" in "kube-system" namespace has status "Ready":"False"
	I0416 16:35:02.898980   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:02.899004   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:02.899014   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:02.899019   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:02.902452   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:02.903310   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:02.903327   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:02.903338   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:02.903343   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:02.905915   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:03.398357   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:03.398379   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.398387   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.398391   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.402364   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.403190   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:03.403204   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.403210   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.403214   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.405992   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:03.899146   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:03.899166   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.899172   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.899176   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.905963   20924 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:35:03.907540   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:03.907558   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.907569   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.907576   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.911163   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.911737   20924 pod_ready.go:92] pod "etcd-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:03.911753   20924 pod_ready.go:81] duration metric: took 5.513586854s for pod "etcd-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.911766   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.911811   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552
	I0416 16:35:03.911819   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.911827   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.911830   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.914804   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:03.915395   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:03.915408   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.915414   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.915419   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.919024   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.919560   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:03.919575   20924 pod_ready.go:81] duration metric: took 7.803617ms for pod "kube-apiserver-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.919582   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.919623   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m02
	I0416 16:35:03.919633   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.919639   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.919644   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.922948   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.923571   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:03.923584   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.923593   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.923600   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.926632   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.927334   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:03.927350   20924 pod_ready.go:81] duration metric: took 7.76232ms for pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.927359   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.927399   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552
	I0416 16:35:03.927407   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.927414   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.927418   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.930348   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:03.961202   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:03.961236   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.961244   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.961249   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.964893   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.965508   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:03.965529   20924 pod_ready.go:81] duration metric: took 38.160856ms for pod "kube-controller-manager-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.965541   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:04.160906   20924 request.go:629] Waited for 195.30531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m02
	I0416 16:35:04.160974   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m02
	I0416 16:35:04.160979   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.160987   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.160992   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.164563   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:04.361532   20924 request.go:629] Waited for 196.076002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:04.361589   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:04.361594   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.361605   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.361610   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.365413   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:04.366011   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:04.366031   20924 pod_ready.go:81] duration metric: took 400.48186ms for pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:04.366043   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vkts" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:04.561178   20924 request.go:629] Waited for 195.070796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vkts
	I0416 16:35:04.561249   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vkts
	I0416 16:35:04.561254   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.561261   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.561267   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.565398   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:35:04.761609   20924 request.go:629] Waited for 195.41036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:04.761692   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:04.761711   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.761722   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.761728   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.766577   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:35:04.767315   20924 pod_ready.go:92] pod "kube-proxy-2vkts" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:04.767332   20924 pod_ready.go:81] duration metric: took 401.282798ms for pod "kube-proxy-2vkts" in "kube-system" namespace to be "Ready" ...
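
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter on the client side, which defaults to roughly 5 requests/s with a burst of 10 when rest.Config leaves QPS and Burst at zero. A short sketch of where those knobs live, with illustrative values and a placeholder kubeconfig path:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        // Raising these reduces the client-side waits seen in the log;
        // leaving them at zero falls back to client-go's defaults (QPS 5, Burst 10).
        cfg.QPS = 20
        cfg.Burst = 40

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }
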
	I0416 16:35:04.767341   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9lhc" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:04.961840   20924 request.go:629] Waited for 194.444607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9lhc
	I0416 16:35:04.961905   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9lhc
	I0416 16:35:04.961910   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.961916   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.961920   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.964936   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.161673   20924 request.go:629] Waited for 195.78813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:05.161739   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:05.161745   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.161753   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.161759   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.165689   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.166550   20924 pod_ready.go:92] pod "kube-proxy-c9lhc" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:05.166573   20924 pod_ready.go:81] duration metric: took 399.225298ms for pod "kube-proxy-c9lhc" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.166585   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.361662   20924 request.go:629] Waited for 195.004558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552
	I0416 16:35:05.361719   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552
	I0416 16:35:05.361724   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.361732   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.361737   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.365411   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.560782   20924 request.go:629] Waited for 194.277771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:05.560854   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:05.560873   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.560881   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.560885   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.564496   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.565425   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:05.565443   20924 pod_ready.go:81] duration metric: took 398.851526ms for pod "kube-scheduler-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.565452   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.761506   20924 request.go:629] Waited for 195.996627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m02
	I0416 16:35:05.761591   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m02
	I0416 16:35:05.761604   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.761615   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.761623   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.765648   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:35:05.961805   20924 request.go:629] Waited for 195.376797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:05.961869   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:05.961889   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.961904   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.961910   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.965893   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.966659   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:05.966685   20924 pod_ready.go:81] duration metric: took 401.226092ms for pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.966699   20924 pod_ready.go:38] duration metric: took 7.601162362s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:35:05.966714   20924 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:35:05.966778   20924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:35:05.984626   20924 api_server.go:72] duration metric: took 10.930847996s to wait for apiserver process to appear ...
	I0416 16:35:05.984650   20924 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:35:05.984670   20924 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0416 16:35:05.989259   20924 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0416 16:35:05.989311   20924 round_trippers.go:463] GET https://192.168.39.97:8443/version
	I0416 16:35:05.989317   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.989325   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.989335   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.990425   20924 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 16:35:05.990525   20924 api_server.go:141] control plane version: v1.29.3
	I0416 16:35:05.990545   20924 api_server.go:131] duration metric: took 5.888134ms to wait for apiserver health ...
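
The health and version probes above are plain GETs against /healthz (body "ok" when healthy) and /version (which reports v1.29.3 here). A sketch of issuing both through a clientset's discovery REST client, assuming a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET /healthz: the body is the literal string "ok" when the apiserver is healthy.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version: structured server version; GitVersion is the control-plane version.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
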
	I0416 16:35:05.990553   20924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:35:06.160915   20924 request.go:629] Waited for 170.294501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:35:06.160999   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:35:06.161005   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:06.161012   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:06.161016   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:06.167104   20924 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:35:06.173151   20924 system_pods.go:59] 17 kube-system pods found
	I0416 16:35:06.173206   20924 system_pods.go:61] "coredns-76f75df574-k7bn7" [8f45a7f4-5779-49ad-949c-29fe8ad7d485] Running
	I0416 16:35:06.173213   20924 system_pods.go:61] "coredns-76f75df574-l9zck" [4f0d01cc-4c32-4953-88ec-f07e72666894] Running
	I0416 16:35:06.173217   20924 system_pods.go:61] "etcd-ha-543552" [e0b55a81-bfa4-4ba4-adde-69d72d728240] Running
	I0416 16:35:06.173221   20924 system_pods.go:61] "etcd-ha-543552-m02" [79a7bdf2-6297-434f-afde-dcee38a7f4b6] Running
	I0416 16:35:06.173224   20924 system_pods.go:61] "kindnet-7hwtp" [f54400cd-4ab3-4e00-b741-e1419d1b3b66] Running
	I0416 16:35:06.173227   20924 system_pods.go:61] "kindnet-q4275" [2f65c59e-1e69-402a-af3a-2c28f7783c9f] Running
	I0416 16:35:06.173230   20924 system_pods.go:61] "kube-apiserver-ha-543552" [4010eca2-0d2e-46c1-9c8f-59961c27c3bf] Running
	I0416 16:35:06.173233   20924 system_pods.go:61] "kube-apiserver-ha-543552-m02" [f2e26e25-fb61-4754-a98b-1c0235c2907f] Running
	I0416 16:35:06.173236   20924 system_pods.go:61] "kube-controller-manager-ha-543552" [9aa3103c-1ada-4947-84cb-c6d6c80274f0] Running
	I0416 16:35:06.173240   20924 system_pods.go:61] "kube-controller-manager-ha-543552-m02" [d0cfc02d-baa6-4c39-960a-c94989f7f545] Running
	I0416 16:35:06.173244   20924 system_pods.go:61] "kube-proxy-2vkts" [4d33f122-fdc5-47ef-abd8-1e3074401db9] Running
	I0416 16:35:06.173247   20924 system_pods.go:61] "kube-proxy-c9lhc" [b8027952-1449-42c9-9bea-14aa1eb113aa] Running
	I0416 16:35:06.173254   20924 system_pods.go:61] "kube-scheduler-ha-543552" [644f8507-38cf-41d2-8c3a-cf1d2817bcff] Running
	I0416 16:35:06.173257   20924 system_pods.go:61] "kube-scheduler-ha-543552-m02" [06bfa48f-a357-4c0b-a36d-fd9802387211] Running
	I0416 16:35:06.173259   20924 system_pods.go:61] "kube-vip-ha-543552" [73f7261f-431b-4d66-9567-cd65dafbf212] Running
	I0416 16:35:06.173264   20924 system_pods.go:61] "kube-vip-ha-543552-m02" [315f50da-9df3-47a5-a88f-72857a417304] Running
	I0416 16:35:06.173268   20924 system_pods.go:61] "storage-provisioner" [663f4c76-01f8-4664-9345-740540fdc41c] Running
	I0416 16:35:06.173274   20924 system_pods.go:74] duration metric: took 182.71198ms to wait for pod list to return data ...
	I0416 16:35:06.173289   20924 default_sa.go:34] waiting for default service account to be created ...
	I0416 16:35:06.361743   20924 request.go:629] Waited for 188.371258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0416 16:35:06.361797   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0416 16:35:06.361802   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:06.361809   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:06.361813   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:06.365591   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:06.365841   20924 default_sa.go:45] found service account: "default"
	I0416 16:35:06.365861   20924 default_sa.go:55] duration metric: took 192.566887ms for default service account to be created ...
	I0416 16:35:06.365868   20924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 16:35:06.561305   20924 request.go:629] Waited for 195.367623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:35:06.561369   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:35:06.561374   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:06.561382   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:06.561387   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:06.568778   20924 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 16:35:06.574146   20924 system_pods.go:86] 17 kube-system pods found
	I0416 16:35:06.574172   20924 system_pods.go:89] "coredns-76f75df574-k7bn7" [8f45a7f4-5779-49ad-949c-29fe8ad7d485] Running
	I0416 16:35:06.574177   20924 system_pods.go:89] "coredns-76f75df574-l9zck" [4f0d01cc-4c32-4953-88ec-f07e72666894] Running
	I0416 16:35:06.574182   20924 system_pods.go:89] "etcd-ha-543552" [e0b55a81-bfa4-4ba4-adde-69d72d728240] Running
	I0416 16:35:06.574186   20924 system_pods.go:89] "etcd-ha-543552-m02" [79a7bdf2-6297-434f-afde-dcee38a7f4b6] Running
	I0416 16:35:06.574189   20924 system_pods.go:89] "kindnet-7hwtp" [f54400cd-4ab3-4e00-b741-e1419d1b3b66] Running
	I0416 16:35:06.574193   20924 system_pods.go:89] "kindnet-q4275" [2f65c59e-1e69-402a-af3a-2c28f7783c9f] Running
	I0416 16:35:06.574200   20924 system_pods.go:89] "kube-apiserver-ha-543552" [4010eca2-0d2e-46c1-9c8f-59961c27c3bf] Running
	I0416 16:35:06.574205   20924 system_pods.go:89] "kube-apiserver-ha-543552-m02" [f2e26e25-fb61-4754-a98b-1c0235c2907f] Running
	I0416 16:35:06.574209   20924 system_pods.go:89] "kube-controller-manager-ha-543552" [9aa3103c-1ada-4947-84cb-c6d6c80274f0] Running
	I0416 16:35:06.574213   20924 system_pods.go:89] "kube-controller-manager-ha-543552-m02" [d0cfc02d-baa6-4c39-960a-c94989f7f545] Running
	I0416 16:35:06.574217   20924 system_pods.go:89] "kube-proxy-2vkts" [4d33f122-fdc5-47ef-abd8-1e3074401db9] Running
	I0416 16:35:06.574221   20924 system_pods.go:89] "kube-proxy-c9lhc" [b8027952-1449-42c9-9bea-14aa1eb113aa] Running
	I0416 16:35:06.574224   20924 system_pods.go:89] "kube-scheduler-ha-543552" [644f8507-38cf-41d2-8c3a-cf1d2817bcff] Running
	I0416 16:35:06.574228   20924 system_pods.go:89] "kube-scheduler-ha-543552-m02" [06bfa48f-a357-4c0b-a36d-fd9802387211] Running
	I0416 16:35:06.574232   20924 system_pods.go:89] "kube-vip-ha-543552" [73f7261f-431b-4d66-9567-cd65dafbf212] Running
	I0416 16:35:06.574236   20924 system_pods.go:89] "kube-vip-ha-543552-m02" [315f50da-9df3-47a5-a88f-72857a417304] Running
	I0416 16:35:06.574239   20924 system_pods.go:89] "storage-provisioner" [663f4c76-01f8-4664-9345-740540fdc41c] Running
	I0416 16:35:06.574245   20924 system_pods.go:126] duration metric: took 208.372151ms to wait for k8s-apps to be running ...
	I0416 16:35:06.574257   20924 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 16:35:06.574302   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:35:06.591603   20924 system_svc.go:56] duration metric: took 17.33744ms WaitForService to wait for kubelet
	I0416 16:35:06.591632   20924 kubeadm.go:576] duration metric: took 11.537857616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:35:06.591652   20924 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:35:06.760823   20924 request.go:629] Waited for 169.101079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes
	I0416 16:35:06.760909   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes
	I0416 16:35:06.760916   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:06.760927   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:06.760937   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:06.764601   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:06.765687   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:35:06.765709   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:35:06.765720   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:35:06.765723   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:35:06.765727   20924 node_conditions.go:105] duration metric: took 174.071725ms to run NodePressure ...
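
The NodePressure step reads each node's capacity (ephemeral-storage 17734596Ki and 2 CPUs per node in this run) and would flag any pressure conditions. A sketch of pulling the same figures from the Node objects with the standard corev1 resource names and condition types (illustrative, not minikube's node_conditions.go):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())

            // Any of these set to True would indicate node pressure.
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        fmt.Printf("  pressure: %s\n", c.Type)
                    }
                }
            }
        }
    }
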
	I0416 16:35:06.765742   20924 start.go:240] waiting for startup goroutines ...
	I0416 16:35:06.765765   20924 start.go:254] writing updated cluster config ...
	I0416 16:35:06.767826   20924 out.go:177] 
	I0416 16:35:06.769387   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:35:06.769504   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:35:06.771152   20924 out.go:177] * Starting "ha-543552-m03" control-plane node in "ha-543552" cluster
	I0416 16:35:06.772343   20924 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:35:06.772363   20924 cache.go:56] Caching tarball of preloaded images
	I0416 16:35:06.772438   20924 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:35:06.772449   20924 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:35:06.772533   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:35:06.772687   20924 start.go:360] acquireMachinesLock for ha-543552-m03: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:35:06.772724   20924 start.go:364] duration metric: took 20.458µs to acquireMachinesLock for "ha-543552-m03"
	I0416 16:35:06.772745   20924 start.go:93] Provisioning new machine with config: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:35:06.772833   20924 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0416 16:35:06.774391   20924 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:35:06.774473   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:35:06.774516   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:35:06.789258   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0416 16:35:06.789719   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:35:06.790194   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:35:06.790212   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:35:06.790510   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:35:06.790729   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetMachineName
	I0416 16:35:06.790882   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:06.791021   20924 start.go:159] libmachine.API.Create for "ha-543552" (driver="kvm2")
	I0416 16:35:06.791052   20924 client.go:168] LocalClient.Create starting
	I0416 16:35:06.791084   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 16:35:06.791132   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:35:06.791152   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:35:06.791210   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 16:35:06.791237   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:35:06.791254   20924 main.go:141] libmachine: Parsing certificate...
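
The certificate steps above read .minikube/certs/ca.pem and cert.pem, decode the PEM blocks, and parse the certificates before provisioning. A standard-library sketch of that read/decode/parse sequence with a placeholder path (illustrative, not libmachine's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/path/to/ca.pem") // placeholder path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil || block.Type != "CERTIFICATE" {
            panic("no PEM certificate block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
    }
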
	I0416 16:35:06.791281   20924 main.go:141] libmachine: Running pre-create checks...
	I0416 16:35:06.791292   20924 main.go:141] libmachine: (ha-543552-m03) Calling .PreCreateCheck
	I0416 16:35:06.791451   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetConfigRaw
	I0416 16:35:06.791774   20924 main.go:141] libmachine: Creating machine...
	I0416 16:35:06.791788   20924 main.go:141] libmachine: (ha-543552-m03) Calling .Create
	I0416 16:35:06.791910   20924 main.go:141] libmachine: (ha-543552-m03) Creating KVM machine...
	I0416 16:35:06.793102   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found existing default KVM network
	I0416 16:35:06.793243   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found existing private KVM network mk-ha-543552
	I0416 16:35:06.793408   20924 main.go:141] libmachine: (ha-543552-m03) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03 ...
	I0416 16:35:06.793436   20924 main.go:141] libmachine: (ha-543552-m03) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:35:06.793470   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:06.793372   21709 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:35:06.793549   20924 main.go:141] libmachine: (ha-543552-m03) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:35:07.001488   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:07.001360   21709 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa...
	I0416 16:35:07.314320   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:07.314215   21709 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/ha-543552-m03.rawdisk...
	I0416 16:35:07.314362   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Writing magic tar header
	I0416 16:35:07.314374   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Writing SSH key tar header
	I0416 16:35:07.314382   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:07.314323   21709 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03 ...
	I0416 16:35:07.314441   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03
	I0416 16:35:07.314459   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03 (perms=drwx------)
	I0416 16:35:07.314466   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:35:07.314502   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 16:35:07.314532   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:35:07.314544   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 16:35:07.314557   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 16:35:07.314566   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:35:07.314576   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 16:35:07.314592   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:35:07.314603   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 16:35:07.314613   20924 main.go:141] libmachine: (ha-543552-m03) Creating domain...
	I0416 16:35:07.314622   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:35:07.314631   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home
	I0416 16:35:07.314642   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Skipping /home - not owner
	I0416 16:35:07.315483   20924 main.go:141] libmachine: (ha-543552-m03) define libvirt domain using xml: 
	I0416 16:35:07.315505   20924 main.go:141] libmachine: (ha-543552-m03) <domain type='kvm'>
	I0416 16:35:07.315515   20924 main.go:141] libmachine: (ha-543552-m03)   <name>ha-543552-m03</name>
	I0416 16:35:07.315530   20924 main.go:141] libmachine: (ha-543552-m03)   <memory unit='MiB'>2200</memory>
	I0416 16:35:07.315540   20924 main.go:141] libmachine: (ha-543552-m03)   <vcpu>2</vcpu>
	I0416 16:35:07.315551   20924 main.go:141] libmachine: (ha-543552-m03)   <features>
	I0416 16:35:07.315563   20924 main.go:141] libmachine: (ha-543552-m03)     <acpi/>
	I0416 16:35:07.315573   20924 main.go:141] libmachine: (ha-543552-m03)     <apic/>
	I0416 16:35:07.315586   20924 main.go:141] libmachine: (ha-543552-m03)     <pae/>
	I0416 16:35:07.315596   20924 main.go:141] libmachine: (ha-543552-m03)     
	I0416 16:35:07.315605   20924 main.go:141] libmachine: (ha-543552-m03)   </features>
	I0416 16:35:07.315619   20924 main.go:141] libmachine: (ha-543552-m03)   <cpu mode='host-passthrough'>
	I0416 16:35:07.315626   20924 main.go:141] libmachine: (ha-543552-m03)   
	I0416 16:35:07.315635   20924 main.go:141] libmachine: (ha-543552-m03)   </cpu>
	I0416 16:35:07.315643   20924 main.go:141] libmachine: (ha-543552-m03)   <os>
	I0416 16:35:07.315655   20924 main.go:141] libmachine: (ha-543552-m03)     <type>hvm</type>
	I0416 16:35:07.315667   20924 main.go:141] libmachine: (ha-543552-m03)     <boot dev='cdrom'/>
	I0416 16:35:07.315676   20924 main.go:141] libmachine: (ha-543552-m03)     <boot dev='hd'/>
	I0416 16:35:07.315692   20924 main.go:141] libmachine: (ha-543552-m03)     <bootmenu enable='no'/>
	I0416 16:35:07.315701   20924 main.go:141] libmachine: (ha-543552-m03)   </os>
	I0416 16:35:07.315725   20924 main.go:141] libmachine: (ha-543552-m03)   <devices>
	I0416 16:35:07.315744   20924 main.go:141] libmachine: (ha-543552-m03)     <disk type='file' device='cdrom'>
	I0416 16:35:07.315776   20924 main.go:141] libmachine: (ha-543552-m03)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/boot2docker.iso'/>
	I0416 16:35:07.315799   20924 main.go:141] libmachine: (ha-543552-m03)       <target dev='hdc' bus='scsi'/>
	I0416 16:35:07.315815   20924 main.go:141] libmachine: (ha-543552-m03)       <readonly/>
	I0416 16:35:07.315833   20924 main.go:141] libmachine: (ha-543552-m03)     </disk>
	I0416 16:35:07.315851   20924 main.go:141] libmachine: (ha-543552-m03)     <disk type='file' device='disk'>
	I0416 16:35:07.315865   20924 main.go:141] libmachine: (ha-543552-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:35:07.315882   20924 main.go:141] libmachine: (ha-543552-m03)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/ha-543552-m03.rawdisk'/>
	I0416 16:35:07.315893   20924 main.go:141] libmachine: (ha-543552-m03)       <target dev='hda' bus='virtio'/>
	I0416 16:35:07.315904   20924 main.go:141] libmachine: (ha-543552-m03)     </disk>
	I0416 16:35:07.315917   20924 main.go:141] libmachine: (ha-543552-m03)     <interface type='network'>
	I0416 16:35:07.315929   20924 main.go:141] libmachine: (ha-543552-m03)       <source network='mk-ha-543552'/>
	I0416 16:35:07.315940   20924 main.go:141] libmachine: (ha-543552-m03)       <model type='virtio'/>
	I0416 16:35:07.315948   20924 main.go:141] libmachine: (ha-543552-m03)     </interface>
	I0416 16:35:07.315959   20924 main.go:141] libmachine: (ha-543552-m03)     <interface type='network'>
	I0416 16:35:07.315971   20924 main.go:141] libmachine: (ha-543552-m03)       <source network='default'/>
	I0416 16:35:07.315982   20924 main.go:141] libmachine: (ha-543552-m03)       <model type='virtio'/>
	I0416 16:35:07.315998   20924 main.go:141] libmachine: (ha-543552-m03)     </interface>
	I0416 16:35:07.316016   20924 main.go:141] libmachine: (ha-543552-m03)     <serial type='pty'>
	I0416 16:35:07.316028   20924 main.go:141] libmachine: (ha-543552-m03)       <target port='0'/>
	I0416 16:35:07.316037   20924 main.go:141] libmachine: (ha-543552-m03)     </serial>
	I0416 16:35:07.316050   20924 main.go:141] libmachine: (ha-543552-m03)     <console type='pty'>
	I0416 16:35:07.316063   20924 main.go:141] libmachine: (ha-543552-m03)       <target type='serial' port='0'/>
	I0416 16:35:07.316077   20924 main.go:141] libmachine: (ha-543552-m03)     </console>
	I0416 16:35:07.316092   20924 main.go:141] libmachine: (ha-543552-m03)     <rng model='virtio'>
	I0416 16:35:07.316106   20924 main.go:141] libmachine: (ha-543552-m03)       <backend model='random'>/dev/random</backend>
	I0416 16:35:07.316117   20924 main.go:141] libmachine: (ha-543552-m03)     </rng>
	I0416 16:35:07.316126   20924 main.go:141] libmachine: (ha-543552-m03)     
	I0416 16:35:07.316133   20924 main.go:141] libmachine: (ha-543552-m03)     
	I0416 16:35:07.316149   20924 main.go:141] libmachine: (ha-543552-m03)   </devices>
	I0416 16:35:07.316164   20924 main.go:141] libmachine: (ha-543552-m03) </domain>
	I0416 16:35:07.316179   20924 main.go:141] libmachine: (ha-543552-m03) 
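The XML emitted above is the libvirt domain definition for ha-543552-m03: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, a raw disk image, and two virtio NICs on the default and mk-ha-543552 networks. A hedged sketch of defining and starting such a domain, assuming the libvirt.org/go/libvirt bindings (the real kvm2 driver code differs; the XML string is a placeholder for the document shown in the log):

```go
// Minimal sketch (not minikube's driver code) of turning a domain XML document
// into a running KVM guest via the libvirt Go bindings.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Placeholder for the full <domain type='kvm'>…</domain> XML logged above.
	domainXML := "<domain type='kvm'>...</domain>"

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..."
		log.Fatalf("start: %v", err)
	}
}
```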
	I0416 16:35:07.322334   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:34:f6:92 in network default
	I0416 16:35:07.322901   20924 main.go:141] libmachine: (ha-543552-m03) Ensuring networks are active...
	I0416 16:35:07.322922   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:07.323595   20924 main.go:141] libmachine: (ha-543552-m03) Ensuring network default is active
	I0416 16:35:07.323918   20924 main.go:141] libmachine: (ha-543552-m03) Ensuring network mk-ha-543552 is active
	I0416 16:35:07.324382   20924 main.go:141] libmachine: (ha-543552-m03) Getting domain xml...
	I0416 16:35:07.325048   20924 main.go:141] libmachine: (ha-543552-m03) Creating domain...
	I0416 16:35:08.531141   20924 main.go:141] libmachine: (ha-543552-m03) Waiting to get IP...
	I0416 16:35:08.531828   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:08.532253   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:08.532281   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:08.532227   21709 retry.go:31] will retry after 294.77499ms: waiting for machine to come up
	I0416 16:35:08.828811   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:08.829251   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:08.829281   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:08.829200   21709 retry.go:31] will retry after 297.816737ms: waiting for machine to come up
	I0416 16:35:09.128910   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:09.129461   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:09.129493   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:09.129418   21709 retry.go:31] will retry after 477.127226ms: waiting for machine to come up
	I0416 16:35:09.607949   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:09.608418   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:09.608442   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:09.608357   21709 retry.go:31] will retry after 456.349369ms: waiting for machine to come up
	I0416 16:35:10.065854   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:10.066365   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:10.066396   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:10.066325   21709 retry.go:31] will retry after 561.879222ms: waiting for machine to come up
	I0416 16:35:10.629994   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:10.630413   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:10.630429   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:10.630371   21709 retry.go:31] will retry after 726.3447ms: waiting for machine to come up
	I0416 16:35:11.357873   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:11.358350   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:11.358380   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:11.358296   21709 retry.go:31] will retry after 797.57283ms: waiting for machine to come up
	I0416 16:35:12.157789   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:12.158306   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:12.158346   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:12.158269   21709 retry.go:31] will retry after 1.434488181s: waiting for machine to come up
	I0416 16:35:13.594778   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:13.595213   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:13.595242   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:13.595160   21709 retry.go:31] will retry after 1.748054995s: waiting for machine to come up
	I0416 16:35:15.346754   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:15.347223   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:15.347251   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:15.347170   21709 retry.go:31] will retry after 1.738692519s: waiting for machine to come up
	I0416 16:35:17.087361   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:17.087832   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:17.087860   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:17.087777   21709 retry.go:31] will retry after 1.747698931s: waiting for machine to come up
	I0416 16:35:18.837831   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:18.838296   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:18.838316   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:18.838267   21709 retry.go:31] will retry after 3.508870725s: waiting for machine to come up
	I0416 16:35:22.349123   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:22.349525   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:22.349557   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:22.349485   21709 retry.go:31] will retry after 3.956653373s: waiting for machine to come up
	I0416 16:35:26.309866   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:26.310253   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:26.310274   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:26.310209   21709 retry.go:31] will retry after 5.115453223s: waiting for machine to come up
	I0416 16:35:31.429812   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.430299   20924 main.go:141] libmachine: (ha-543552-m03) Found IP for machine: 192.168.39.125
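The repeated "will retry after …: waiting for machine to come up" lines are a polling loop that re-queries the DHCP lease with growing, jittered delays until the domain reports an IP or an overall deadline expires. An illustrative stand-alone sketch of the same pattern (lookupIP, the initial delay, and the cap are assumptions, not minikube APIs):

```go
// Sketch of a jittered-backoff wait loop in the spirit of the retry lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 5*time.Second { // grow the base delay, but cap it
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.125", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```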
	I0416 16:35:31.430322   20924 main.go:141] libmachine: (ha-543552-m03) Reserving static IP address...
	I0416 16:35:31.430338   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has current primary IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.430716   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find host DHCP lease matching {name: "ha-543552-m03", mac: "52:54:00:f9:15:9d", ip: "192.168.39.125"} in network mk-ha-543552
	I0416 16:35:31.501459   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Getting to WaitForSSH function...
	I0416 16:35:31.501495   20924 main.go:141] libmachine: (ha-543552-m03) Reserved static IP address: 192.168.39.125
	I0416 16:35:31.501541   20924 main.go:141] libmachine: (ha-543552-m03) Waiting for SSH to be available...
	I0416 16:35:31.504105   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.504608   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.504638   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.504779   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Using SSH client type: external
	I0416 16:35:31.504808   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa (-rw-------)
	I0416 16:35:31.504854   20924 main.go:141] libmachine: (ha-543552-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:35:31.504868   20924 main.go:141] libmachine: (ha-543552-m03) DBG | About to run SSH command:
	I0416 16:35:31.504879   20924 main.go:141] libmachine: (ha-543552-m03) DBG | exit 0
	I0416 16:35:31.628917   20924 main.go:141] libmachine: (ha-543552-m03) DBG | SSH cmd err, output: <nil>: 
	I0416 16:35:31.629177   20924 main.go:141] libmachine: (ha-543552-m03) KVM machine creation complete!
	I0416 16:35:31.629557   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetConfigRaw
	I0416 16:35:31.630120   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:31.630327   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:31.630485   20924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:35:31.630501   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:35:31.631760   20924 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:35:31.631775   20924 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:35:31.631793   20924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:35:31.631804   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:31.634109   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.634494   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.634514   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.634686   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:31.634845   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.635017   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.635163   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:31.635311   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:31.635489   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:31.635506   20924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:35:31.736455   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:35:31.736498   20924 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:35:31.736510   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:31.739233   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.739547   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.739580   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.739706   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:31.739908   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.740065   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.740209   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:31.740338   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:31.740510   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:31.740524   20924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:35:31.842072   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:35:31.842154   20924 main.go:141] libmachine: found compatible host: buildroot
	I0416 16:35:31.842165   20924 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:35:31.842172   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetMachineName
	I0416 16:35:31.842444   20924 buildroot.go:166] provisioning hostname "ha-543552-m03"
	I0416 16:35:31.842474   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetMachineName
	I0416 16:35:31.842651   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:31.845282   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.845687   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.845716   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.845873   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:31.846059   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.846189   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.846334   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:31.846545   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:31.846750   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:31.846769   20924 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-543552-m03 && echo "ha-543552-m03" | sudo tee /etc/hostname
	I0416 16:35:31.968895   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552-m03
	
	I0416 16:35:31.968920   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:31.971726   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.972138   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.972161   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.972393   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:31.972542   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.972721   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.972885   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:31.973036   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:31.973192   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:31.973205   20924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-543552-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-543552-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-543552-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:35:32.086601   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:35:32.086629   20924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:35:32.086646   20924 buildroot.go:174] setting up certificates
	I0416 16:35:32.086656   20924 provision.go:84] configureAuth start
	I0416 16:35:32.086668   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetMachineName
	I0416 16:35:32.086899   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:35:32.089858   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.090257   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.090290   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.090427   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.092569   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.092881   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.092923   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.093070   20924 provision.go:143] copyHostCerts
	I0416 16:35:32.093103   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:35:32.093142   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 16:35:32.093154   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:35:32.093233   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:35:32.093325   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:35:32.093351   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 16:35:32.093360   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:35:32.093395   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:35:32.093452   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:35:32.093473   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 16:35:32.093483   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:35:32.093517   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:35:32.093581   20924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.ha-543552-m03 san=[127.0.0.1 192.168.39.125 ha-543552-m03 localhost minikube]
	I0416 16:35:32.312980   20924 provision.go:177] copyRemoteCerts
	I0416 16:35:32.313038   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:35:32.313061   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.315541   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.315899   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.315928   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.316156   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:32.316374   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.316572   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:32.316716   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:35:32.396258   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 16:35:32.396328   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:35:32.427516   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 16:35:32.427588   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:35:32.458091   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 16:35:32.458148   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 16:35:32.484758   20924 provision.go:87] duration metric: took 398.089807ms to configureAuth
	I0416 16:35:32.484792   20924 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:35:32.485049   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:35:32.485143   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.487937   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.488322   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.488350   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.488560   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:32.488782   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.488945   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.489071   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:32.489242   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:32.489419   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:32.489434   20924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:35:32.782848   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 16:35:32.782876   20924 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:35:32.782886   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetURL
	I0416 16:35:32.784332   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Using libvirt version 6000000
	I0416 16:35:32.786671   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.787017   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.787044   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.787203   20924 main.go:141] libmachine: Docker is up and running!
	I0416 16:35:32.787214   20924 main.go:141] libmachine: Reticulating splines...
	I0416 16:35:32.787221   20924 client.go:171] duration metric: took 25.996158862s to LocalClient.Create
	I0416 16:35:32.787247   20924 start.go:167] duration metric: took 25.996226949s to libmachine.API.Create "ha-543552"
	I0416 16:35:32.787259   20924 start.go:293] postStartSetup for "ha-543552-m03" (driver="kvm2")
	I0416 16:35:32.787286   20924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:35:32.787315   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:32.787560   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:35:32.787590   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.789792   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.790137   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.790167   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.790275   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:32.790470   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.790628   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:32.790773   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:35:32.877265   20924 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:35:32.882431   20924 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:35:32.882454   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:35:32.882521   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:35:32.882609   20924 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 16:35:32.882619   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 16:35:32.882717   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:35:32.893123   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:35:32.919872   20924 start.go:296] duration metric: took 132.598201ms for postStartSetup
	I0416 16:35:32.919914   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetConfigRaw
	I0416 16:35:32.920543   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:35:32.923242   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.923656   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.923685   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.923955   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:35:32.924129   20924 start.go:128] duration metric: took 26.151272358s to createHost
	I0416 16:35:32.924151   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.926252   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.926604   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.926625   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.926763   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:32.926922   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.927056   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.927177   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:32.927336   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:32.927524   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:32.927539   20924 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:35:33.034270   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285333.016350836
	
	I0416 16:35:33.034294   20924 fix.go:216] guest clock: 1713285333.016350836
	I0416 16:35:33.034303   20924 fix.go:229] Guest: 2024-04-16 16:35:33.016350836 +0000 UTC Remote: 2024-04-16 16:35:32.924141423 +0000 UTC m=+155.159321005 (delta=92.209413ms)
	I0416 16:35:33.034322   20924 fix.go:200] guest clock delta is within tolerance: 92.209413ms
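The clock check above compares the guest's reported time against the host's reference time and accepts the 92.209413ms delta as within tolerance, so no resync is needed. A small sketch of that comparison using the values from the log (the one-second tolerance is an assumption for illustration; minikube's actual threshold may differ):

```go
// Sketch of the guest-clock tolerance check: compute |guest - host| and compare
// against an allowed drift before deciding whether to resync the guest clock.
package main

import (
	"fmt"
	"time"
)

func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1713285333016350836).UTC() // "guest clock: 1713285333.016350836"
	host := guest.Add(-92209413 * time.Nanosecond)   // host ~92.209413ms behind, as in the log
	d, ok := clockWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
```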
	I0416 16:35:33.034330   20924 start.go:83] releasing machines lock for "ha-543552-m03", held for 26.261595405s
	I0416 16:35:33.034351   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:33.034592   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:35:33.037469   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.037861   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:33.037892   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.040362   20924 out.go:177] * Found network options:
	I0416 16:35:33.042010   20924 out.go:177]   - NO_PROXY=192.168.39.97,192.168.39.80
	W0416 16:35:33.043447   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:35:33.043468   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:35:33.043479   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:33.044104   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:33.044320   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:33.044424   20924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:35:33.044460   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	W0416 16:35:33.044561   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:35:33.044586   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:35:33.044637   20924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:35:33.044656   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:33.047283   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.047311   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.047676   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:33.047707   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.047734   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:33.047773   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.047849   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:33.048018   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:33.048030   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:33.048171   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:33.048182   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:33.048364   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:33.048386   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:35:33.048490   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:35:33.285718   20924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:35:33.292682   20924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:35:33.292747   20924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:35:33.313993   20924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:35:33.314023   20924 start.go:494] detecting cgroup driver to use...
	I0416 16:35:33.314090   20924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:35:33.333251   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:35:33.350424   20924 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:35:33.350487   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:35:33.367096   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:35:33.384913   20924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:35:33.517807   20924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:35:33.690549   20924 docker.go:233] disabling docker service ...
	I0416 16:35:33.690627   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:35:33.707499   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:35:33.723438   20924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:35:33.873524   20924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:35:34.005516   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:35:34.020928   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:35:34.043005   20924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:35:34.043060   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.055243   20924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:35:34.055300   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.067675   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.079574   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.092467   20924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:35:34.105409   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.118622   20924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.138420   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.150657   20924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:35:34.163490   20924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:35:34.163536   20924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:35:34.181661   20924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:35:34.192504   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:35:34.322279   20924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 16:35:34.479653   20924 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:35:34.479739   20924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:35:34.485425   20924 start.go:562] Will wait 60s for crictl version
	I0416 16:35:34.485474   20924 ssh_runner.go:195] Run: which crictl
	I0416 16:35:34.490001   20924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:35:34.529434   20924 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:35:34.529520   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:35:34.559314   20924 ssh_runner.go:195] Run: crio --version
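After restarting CRI-O, the provisioner waits up to 60s for /var/run/crio/crio.sock to appear before querying crictl (the log shows a second 60s wait guarding the crictl version call itself). An illustrative sketch of such a socket wait (the poll interval is an assumption):

```go
// Sketch: poll for a unix socket path with a deadline, in the spirit of
// "Will wait 60s for socket path /var/run/crio/crio.sock" above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket exists; the runtime is accepting connections soon after
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```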
	I0416 16:35:34.591656   20924 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:35:34.593127   20924 out.go:177]   - env NO_PROXY=192.168.39.97
	I0416 16:35:34.594502   20924 out.go:177]   - env NO_PROXY=192.168.39.97,192.168.39.80
	I0416 16:35:34.595864   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:35:34.598190   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:34.598537   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:34.598566   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:34.598738   20924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:35:34.603546   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:35:34.617461   20924 mustload.go:65] Loading cluster: ha-543552
	I0416 16:35:34.617672   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:35:34.617935   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:35:34.617980   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:35:34.634091   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0416 16:35:34.634551   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:35:34.634992   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:35:34.635013   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:35:34.635348   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:35:34.635513   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:35:34.637213   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:35:34.637490   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:35:34.637533   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:35:34.651768   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0416 16:35:34.652134   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:35:34.652545   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:35:34.652571   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:35:34.652878   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:35:34.653080   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:35:34.653257   20924 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552 for IP: 192.168.39.125
	I0416 16:35:34.653269   20924 certs.go:194] generating shared ca certs ...
	I0416 16:35:34.653281   20924 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:35:34.653395   20924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:35:34.653431   20924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:35:34.653437   20924 certs.go:256] generating profile certs ...
	I0416 16:35:34.653498   20924 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key
	I0416 16:35:34.653523   20924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.6b20b4ae
	I0416 16:35:34.653534   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.6b20b4ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.80 192.168.39.125 192.168.39.254]
	I0416 16:35:34.709574   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.6b20b4ae ...
	I0416 16:35:34.709603   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.6b20b4ae: {Name:mk072cdc0acef413d22b7ef1edd66a15ddb0f40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:35:34.709752   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.6b20b4ae ...
	I0416 16:35:34.709763   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.6b20b4ae: {Name:mkd18b9c565f69ea2235df7b592a2ec9e969d15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:35:34.709865   20924 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.6b20b4ae -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt
	I0416 16:35:34.709996   20924 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.6b20b4ae -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key
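The apiserver certificate generated above carries SANs for the cluster service IP (10.96.0.1), loopback, and all three control-plane node IPs plus the 192.168.39.254 VIP, so any of those addresses can terminate TLS for the API server. A self-contained crypto/x509 sketch issuing a certificate with that SAN set (self-signed here for brevity, whereas the real profile cert is signed by the cluster CA; the subject, key usages, and lifetime are assumptions):

```go
// Sketch: issue a server certificate whose IP SANs match the list in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // illustrative subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.80"),
			net.ParseIP("192.168.39.125"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```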
	I0416 16:35:34.710111   20924 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key
	I0416 16:35:34.710132   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:35:34.710143   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:35:34.710156   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:35:34.710169   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:35:34.710181   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:35:34.710194   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:35:34.710205   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:35:34.710218   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:35:34.710269   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 16:35:34.710297   20924 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 16:35:34.710304   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:35:34.710322   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:35:34.710355   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:35:34.710378   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:35:34.710411   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:35:34.710439   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 16:35:34.710453   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:35:34.710465   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 16:35:34.710494   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:35:34.713066   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:35:34.713450   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:35:34.713492   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:35:34.713672   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:35:34.713932   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:35:34.714090   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:35:34.714245   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:35:34.789207   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0416 16:35:34.795016   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0416 16:35:34.810524   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0416 16:35:34.816281   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0416 16:35:34.831128   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0416 16:35:34.835672   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0416 16:35:34.848291   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0416 16:35:34.853159   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0416 16:35:34.867179   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0416 16:35:34.872551   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0416 16:35:34.887141   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0416 16:35:34.892181   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0416 16:35:34.903756   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:35:34.932246   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:35:34.962462   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:35:34.990572   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:35:35.021502   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0416 16:35:35.054403   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:35:35.100964   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:35:35.129372   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:35:35.157612   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 16:35:35.185342   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:35:35.215312   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 16:35:35.243164   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0416 16:35:35.261925   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0416 16:35:35.281377   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0416 16:35:35.299714   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0416 16:35:35.317583   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0416 16:35:35.336376   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0416 16:35:35.357747   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0416 16:35:35.376246   20924 ssh_runner.go:195] Run: openssl version
	I0416 16:35:35.382518   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 16:35:35.394859   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 16:35:35.399721   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 16:35:35.399768   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 16:35:35.406048   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:35:35.418418   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:35:35.430377   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:35:35.435138   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:35:35.435174   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:35:35.441152   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:35:35.453110   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 16:35:35.465021   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 16:35:35.470001   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 16:35:35.470051   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 16:35:35.476854   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
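The three test-and-link blocks above install each CA bundle under its OpenSSL subject hash so TLS clients on the node can resolve it from /etc/ssl/certs. A minimal sketch of that wiring, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem (not a path from this run):

    # Compute the OpenSSL subject hash for the certificate.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    # Expose it in the system trust directory under <hash>.0, the layout openssl rehash/c_rehash produces.
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"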
	I0416 16:35:35.489349   20924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:35:35.494043   20924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:35:35.494094   20924 kubeadm.go:928] updating node {m03 192.168.39.125 8443 v1.29.3 crio true true} ...
	I0416 16:35:35.494168   20924 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-543552-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:35:35.494191   20924 kube-vip.go:111] generating kube-vip config ...
	I0416 16:35:35.494218   20924 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:35:35.516064   20924 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:35:35.516138   20924 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
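The generated manifest above is written into /etc/kubernetes/manifests as a static pod (the scp to kube-vip.yaml appears a few lines below), so kubelet runs kube-vip on each control-plane node; with leader election enabled, only one node advertises the virtual IP 192.168.39.254 over ARP at a time, load-balancing API traffic on port 8443. A minimal sketch of how one might inspect this on a node (illustrative commands, not captured in this log):

    # Show the static pod manifest kubelet will pick up.
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
    # See whether this node currently holds the control-plane virtual IP.
    ip addr show eth0 | grep 192.168.39.254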
	I0416 16:35:35.516183   20924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:35:35.528565   20924 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0416 16:35:35.528629   20924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0416 16:35:35.539578   20924 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0416 16:35:35.539601   20924 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0416 16:35:35.539604   20924 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0416 16:35:35.539627   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 16:35:35.539645   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:35:35.539687   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 16:35:35.539603   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 16:35:35.539780   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 16:35:35.561322   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 16:35:35.561339   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 16:35:35.561372   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0416 16:35:35.561410   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 16:35:35.561434   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 16:35:35.561435   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0416 16:35:35.608684   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 16:35:35.608738   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0416 16:35:36.667671   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0416 16:35:36.679249   20924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0416 16:35:36.698558   20924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:35:36.719019   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0416 16:35:36.739131   20924 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:35:36.744488   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:35:36.758661   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:35:36.895492   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:35:36.917404   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:35:36.917748   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:35:36.917798   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:35:36.933238   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0416 16:35:36.933754   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:35:36.934866   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:35:36.934898   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:35:36.935262   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:35:36.935493   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:35:36.935673   20924 start.go:316] joinCluster: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:35:36.935878   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0416 16:35:36.935900   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:35:36.939484   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:35:36.939956   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:35:36.939983   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:35:36.940145   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:35:36.940439   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:35:36.940648   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:35:36.940833   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:35:37.128696   20924 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:35:37.128753   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s7zeen.uafa4z2skhbmlwz6 --discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-543552-m03 --control-plane --apiserver-advertise-address=192.168.39.125 --apiserver-bind-port=8443"
	I0416 16:36:04.483809   20924 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s7zeen.uafa4z2skhbmlwz6 --discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-543552-m03 --control-plane --apiserver-advertise-address=192.168.39.125 --apiserver-bind-port=8443": (27.355027173s)
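The join above follows the standard kubeadm flow for adding another control-plane node: mint a token and print the full join command on an existing control plane, then run it on the new node with --control-plane (minikube pre-copies the shared CA material itself, as seen in the certificate scp steps earlier in this log). A minimal sketch with placeholder values:

    # On an existing control-plane node: create a token and print the matching join command.
    kubeadm token create --print-join-command --ttl=0
    # On the new node: join as another control plane, advertising its own address.
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --apiserver-advertise-address=<node-ip> \
      --apiserver-bind-port=8443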
	I0416 16:36:04.483863   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0416 16:36:05.288728   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-543552-m03 minikube.k8s.io/updated_at=2024_04_16T16_36_05_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-543552 minikube.k8s.io/primary=false
	I0416 16:36:05.449498   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-543552-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0416 16:36:05.587389   20924 start.go:318] duration metric: took 28.651723514s to joinCluster
	I0416 16:36:05.587463   20924 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:36:05.590020   20924 out.go:177] * Verifying Kubernetes components...
	I0416 16:36:05.587773   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:36:05.591461   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:36:05.987479   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:36:06.122102   20924 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:36:06.122434   20924 kapi.go:59] client config for ha-543552: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt", KeyFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key", CAFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0416 16:36:06.122527   20924 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.97:8443
	I0416 16:36:06.122803   20924 node_ready.go:35] waiting up to 6m0s for node "ha-543552-m03" to be "Ready" ...
	I0416 16:36:06.122910   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:06.122923   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:06.122934   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:06.122943   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:06.127150   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:06.623812   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:06.623845   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:06.623856   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:06.623862   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:06.628160   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:07.122994   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:07.123018   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:07.123026   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:07.123030   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:07.126714   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:07.624041   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:07.624068   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:07.624079   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:07.624086   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:07.627483   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:08.123097   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:08.123123   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:08.123134   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:08.123139   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:08.127073   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:08.127729   20924 node_ready.go:53] node "ha-543552-m03" has status "Ready":"False"
	I0416 16:36:08.623070   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:08.623091   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:08.623099   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:08.623104   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:08.626939   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.122983   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.123007   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.123015   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.123020   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.127281   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:09.127989   20924 node_ready.go:49] node "ha-543552-m03" has status "Ready":"True"
	I0416 16:36:09.128008   20924 node_ready.go:38] duration metric: took 3.005185285s for node "ha-543552-m03" to be "Ready" ...
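The repeated GETs above poll the new node's object until its Ready condition reports True (about three seconds here). Outside the test harness, the same check can be expressed with kubectl; a minimal sketch, not part of this run:

    # Print the Ready condition status for the newly joined node.
    kubectl get node ha-543552-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Or block until the node reports Ready.
    kubectl wait --for=condition=Ready node/ha-543552-m03 --timeout=6m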
	I0416 16:36:09.128016   20924 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:36:09.128073   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:09.128085   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.128096   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.128102   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.135478   20924 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 16:36:09.144960   20924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-k7bn7" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.145046   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-k7bn7
	I0416 16:36:09.145058   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.145068   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.145076   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.149788   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:09.150463   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:09.150477   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.150485   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.150490   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.153659   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.154420   20924 pod_ready.go:92] pod "coredns-76f75df574-k7bn7" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:09.154436   20924 pod_ready.go:81] duration metric: took 9.447894ms for pod "coredns-76f75df574-k7bn7" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.154446   20924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-l9zck" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.154506   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-l9zck
	I0416 16:36:09.154517   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.154527   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.154533   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.158503   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.159531   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:09.159545   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.159553   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.159558   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.162209   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.162925   20924 pod_ready.go:92] pod "coredns-76f75df574-l9zck" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:09.162943   20924 pod_ready.go:81] duration metric: took 8.48929ms for pod "coredns-76f75df574-l9zck" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.162953   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.163004   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552
	I0416 16:36:09.163014   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.163024   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.163029   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.165608   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.166064   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:09.166079   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.166088   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.166093   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.168339   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.168814   20924 pod_ready.go:92] pod "etcd-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:09.168829   20924 pod_ready.go:81] duration metric: took 5.869427ms for pod "etcd-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.168849   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.168931   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:36:09.168944   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.168955   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.168964   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.171585   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.172130   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:09.172146   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.172154   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.172160   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.174820   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.175478   20924 pod_ready.go:92] pod "etcd-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:09.175498   20924 pod_ready.go:81] duration metric: took 6.639989ms for pod "etcd-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.175508   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.323858   20924 request.go:629] Waited for 148.299942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:09.323950   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:09.323962   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.323973   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.323980   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.329019   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:09.523142   20924 request.go:629] Waited for 193.311389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.523208   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.523227   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.523236   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.523242   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.527249   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.723322   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:09.723342   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.723350   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.723354   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.727240   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.923336   20924 request.go:629] Waited for 195.327557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.923397   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.923402   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.923409   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.923413   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.927899   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:10.176106   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:10.176131   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:10.176139   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:10.176143   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:10.181585   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:10.323951   20924 request.go:629] Waited for 141.287059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:10.324000   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:10.324005   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:10.324011   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:10.324015   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:10.327902   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:10.675975   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:10.675999   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:10.676011   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:10.676017   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:10.680040   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:10.723378   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:10.723399   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:10.723407   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:10.723425   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:10.726926   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:11.175783   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:11.175808   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:11.175823   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:11.175829   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:11.179579   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:11.180495   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:11.180514   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:11.180523   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:11.180530   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:11.184004   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:11.184755   20924 pod_ready.go:102] pod "etcd-ha-543552-m03" in "kube-system" namespace has status "Ready":"False"
	I0416 16:36:11.676103   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:11.676130   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:11.676139   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:11.676144   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:11.681891   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:11.683079   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:11.683107   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:11.683114   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:11.683121   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:11.687544   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:12.176254   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:12.176279   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.176286   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.176291   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.180308   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:12.181152   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:12.181170   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.181180   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.181187   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.186396   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:12.675749   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:12.675772   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.675780   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.675783   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.679620   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:12.680994   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:12.681011   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.681025   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.681030   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.684055   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:12.684803   20924 pod_ready.go:92] pod "etcd-ha-543552-m03" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:12.684821   20924 pod_ready.go:81] duration metric: took 3.509304679s for pod "etcd-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:12.684858   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:12.684921   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552
	I0416 16:36:12.684933   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.684943   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.684954   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.687766   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:12.723396   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:12.723427   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.723436   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.723440   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.727021   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:12.727895   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:12.727915   20924 pod_ready.go:81] duration metric: took 43.047665ms for pod "kube-apiserver-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:12.727923   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:12.923045   20924 request.go:629] Waited for 195.069258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m02
	I0416 16:36:12.923115   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m02
	I0416 16:36:12.923121   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.923133   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.923141   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.926403   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:13.123571   20924 request.go:629] Waited for 196.280829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:13.123651   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:13.123656   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.123663   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.123669   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.127684   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:13.128350   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:13.128372   20924 pod_ready.go:81] duration metric: took 400.441626ms for pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:13.128384   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:13.323549   20924 request.go:629] Waited for 195.098361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:13.323632   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:13.323655   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.323683   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.323693   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.327672   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:13.523029   20924 request.go:629] Waited for 194.288109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:13.523079   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:13.523084   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.523090   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.523094   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.526484   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:13.723363   20924 request.go:629] Waited for 94.257303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:13.723436   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:13.723443   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.723452   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.723457   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.727320   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:13.923595   20924 request.go:629] Waited for 195.168474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:13.923671   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:13.923683   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.923694   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.923706   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.927592   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:14.129278   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:14.129298   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:14.129305   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:14.129308   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:14.133946   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:14.323736   20924 request.go:629] Waited for 189.02048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:14.323790   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:14.323796   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:14.323803   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:14.323810   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:14.327507   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:14.629362   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:14.629388   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:14.629399   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:14.629406   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:14.632777   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:14.723546   20924 request.go:629] Waited for 89.51166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:14.723596   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:14.723602   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:14.723610   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:14.723619   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:14.727872   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:15.129300   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:15.129327   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:15.129334   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:15.129338   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:15.132883   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:15.133980   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:15.133992   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:15.134002   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:15.134005   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:15.137163   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:15.137850   20924 pod_ready.go:102] pod "kube-apiserver-ha-543552-m03" in "kube-system" namespace has status "Ready":"False"
	I0416 16:36:15.628670   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:15.628693   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:15.628704   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:15.628711   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:15.632329   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:15.633285   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:15.633307   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:15.633317   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:15.633323   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:15.636004   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:16.129516   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:16.129536   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.129543   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.129548   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.133656   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:16.134363   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:16.134379   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.134387   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.134390   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.137704   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:16.138279   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552-m03" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:16.138299   20924 pod_ready.go:81] duration metric: took 3.00990684s for pod "kube-apiserver-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.138308   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.138361   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552
	I0416 16:36:16.138375   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.138385   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.138397   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.141484   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:16.323665   20924 request.go:629] Waited for 181.35503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:16.323757   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:16.323766   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.323775   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.323782   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.327198   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:16.327855   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:16.327884   20924 pod_ready.go:81] duration metric: took 189.565043ms for pod "kube-controller-manager-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.327897   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.523348   20924 request.go:629] Waited for 195.380108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m02
	I0416 16:36:16.523419   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m02
	I0416 16:36:16.523424   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.523431   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.523435   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.527155   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:16.723535   20924 request.go:629] Waited for 195.402961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:16.723598   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:16.723603   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.723622   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.723639   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.728059   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:16.728678   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:16.728701   20924 pod_ready.go:81] duration metric: took 400.794948ms for pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.728713   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.924003   20924 request.go:629] Waited for 195.211261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m03
	I0416 16:36:16.924064   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m03
	I0416 16:36:16.924071   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.924081   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.924095   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.927848   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:17.123301   20924 request.go:629] Waited for 194.363347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:17.123354   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:17.123359   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.123366   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.123370   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.128474   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:17.129179   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552-m03" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:17.129206   20924 pod_ready.go:81] duration metric: took 400.480248ms for pod "kube-controller-manager-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.129216   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vkts" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.323361   20924 request.go:629] Waited for 194.081395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vkts
	I0416 16:36:17.323501   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vkts
	I0416 16:36:17.323514   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.323523   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.323529   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.329145   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:17.523635   20924 request.go:629] Waited for 193.373878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:17.523684   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:17.523689   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.523695   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.523700   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.528716   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:17.530195   20924 pod_ready.go:92] pod "kube-proxy-2vkts" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:17.530213   20924 pod_ready.go:81] duration metric: took 400.991105ms for pod "kube-proxy-2vkts" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.530221   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ncrw" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.723453   20924 request.go:629] Waited for 193.159148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ncrw
	I0416 16:36:17.723517   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ncrw
	I0416 16:36:17.723522   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.723529   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.723534   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.727525   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:17.923833   20924 request.go:629] Waited for 195.411309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:17.923912   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:17.923918   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.923928   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.923933   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.927566   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:17.928504   20924 pod_ready.go:92] pod "kube-proxy-9ncrw" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:17.928521   20924 pod_ready.go:81] duration metric: took 398.294345ms for pod "kube-proxy-9ncrw" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.928532   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9lhc" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.123866   20924 request.go:629] Waited for 195.243048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9lhc
	I0416 16:36:18.124004   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9lhc
	I0416 16:36:18.124029   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.124041   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.124049   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.134748   20924 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0416 16:36:18.323043   20924 request.go:629] Waited for 187.276686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:18.323097   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:18.323104   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.323114   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.323120   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.329580   20924 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:36:18.330975   20924 pod_ready.go:92] pod "kube-proxy-c9lhc" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:18.330993   20924 pod_ready.go:81] duration metric: took 402.454383ms for pod "kube-proxy-c9lhc" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.331002   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.523032   20924 request.go:629] Waited for 191.95579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552
	I0416 16:36:18.523084   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552
	I0416 16:36:18.523089   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.523101   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.523105   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.526867   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:18.723964   20924 request.go:629] Waited for 196.356109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:18.724034   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:18.724039   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.724046   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.724051   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.727800   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:18.728600   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:18.728628   20924 pod_ready.go:81] duration metric: took 397.620125ms for pod "kube-scheduler-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.728638   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.923696   20924 request.go:629] Waited for 194.996162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m02
	I0416 16:36:18.923756   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m02
	I0416 16:36:18.923761   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.923768   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.923772   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.927792   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:19.123889   20924 request.go:629] Waited for 195.353903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:19.123940   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:19.123946   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.123952   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.123956   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.129981   20924 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:36:19.130969   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:19.130986   20924 pod_ready.go:81] duration metric: took 402.341731ms for pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:19.130999   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:19.323070   20924 request.go:629] Waited for 191.983625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m03
	I0416 16:36:19.323159   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m03
	I0416 16:36:19.323168   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.323175   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.323179   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.327476   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:19.523769   20924 request.go:629] Waited for 195.362415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:19.523872   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:19.523884   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.523895   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.523902   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.527429   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:19.528335   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552-m03" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:19.528353   20924 pod_ready.go:81] duration metric: took 397.346043ms for pod "kube-scheduler-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:19.528363   20924 pod_ready.go:38] duration metric: took 10.400339257s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:36:19.528376   20924 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:36:19.528419   20924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:36:19.547602   20924 api_server.go:72] duration metric: took 13.960104549s to wait for apiserver process to appear ...
	I0416 16:36:19.547624   20924 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:36:19.547651   20924 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0416 16:36:19.554523   20924 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0416 16:36:19.554582   20924 round_trippers.go:463] GET https://192.168.39.97:8443/version
	I0416 16:36:19.554592   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.554602   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.554611   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.555911   20924 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 16:36:19.555971   20924 api_server.go:141] control plane version: v1.29.3
	I0416 16:36:19.555991   20924 api_server.go:131] duration metric: took 8.353386ms to wait for apiserver health ...
	I0416 16:36:19.555997   20924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:36:19.723982   20924 request.go:629] Waited for 167.93243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:19.724063   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:19.724079   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.724088   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.724098   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.733319   20924 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0416 16:36:19.739866   20924 system_pods.go:59] 24 kube-system pods found
	I0416 16:36:19.739897   20924 system_pods.go:61] "coredns-76f75df574-k7bn7" [8f45a7f4-5779-49ad-949c-29fe8ad7d485] Running
	I0416 16:36:19.739903   20924 system_pods.go:61] "coredns-76f75df574-l9zck" [4f0d01cc-4c32-4953-88ec-f07e72666894] Running
	I0416 16:36:19.739906   20924 system_pods.go:61] "etcd-ha-543552" [e0b55a81-bfa4-4ba4-adde-69d72d728240] Running
	I0416 16:36:19.739909   20924 system_pods.go:61] "etcd-ha-543552-m02" [79a7bdf2-6297-434f-afde-dcee38a7f4b6] Running
	I0416 16:36:19.739912   20924 system_pods.go:61] "etcd-ha-543552-m03" [6634160f-7d48-4458-8628-2b3f340d8810] Running
	I0416 16:36:19.739915   20924 system_pods.go:61] "kindnet-6wbkm" [1aa2a9c0-7c95-49ca-817d-1dfaaff56145] Running
	I0416 16:36:19.739918   20924 system_pods.go:61] "kindnet-7hwtp" [f54400cd-4ab3-4e00-b741-e1419d1b3b66] Running
	I0416 16:36:19.739922   20924 system_pods.go:61] "kindnet-q4275" [2f65c59e-1e69-402a-af3a-2c28f7783c9f] Running
	I0416 16:36:19.739926   20924 system_pods.go:61] "kube-apiserver-ha-543552" [4010eca2-0d2e-46c1-9c8f-59961c27c3bf] Running
	I0416 16:36:19.739931   20924 system_pods.go:61] "kube-apiserver-ha-543552-m02" [f2e26e25-fb61-4754-a98b-1c0235c2907f] Running
	I0416 16:36:19.739939   20924 system_pods.go:61] "kube-apiserver-ha-543552-m03" [e20ae43c-f3ac-45fc-a7ac-2b193c0e4a59] Running
	I0416 16:36:19.739945   20924 system_pods.go:61] "kube-controller-manager-ha-543552" [9aa3103c-1ada-4947-84cb-c6d6c80274f0] Running
	I0416 16:36:19.739957   20924 system_pods.go:61] "kube-controller-manager-ha-543552-m02" [d0cfc02d-baa6-4c39-960a-c94989f7f545] Running
	I0416 16:36:19.739962   20924 system_pods.go:61] "kube-controller-manager-ha-543552-m03" [779ae963-1dfb-4d6e-bf23-c49a60880bdd] Running
	I0416 16:36:19.739968   20924 system_pods.go:61] "kube-proxy-2vkts" [4d33f122-fdc5-47ef-abd8-1e3074401db9] Running
	I0416 16:36:19.739977   20924 system_pods.go:61] "kube-proxy-9ncrw" [7c22a15b-35f1-4a08-b5ad-889f7d14706c] Running
	I0416 16:36:19.739982   20924 system_pods.go:61] "kube-proxy-c9lhc" [b8027952-1449-42c9-9bea-14aa1eb113aa] Running
	I0416 16:36:19.739987   20924 system_pods.go:61] "kube-scheduler-ha-543552" [644f8507-38cf-41d2-8c3a-cf1d2817bcff] Running
	I0416 16:36:19.739992   20924 system_pods.go:61] "kube-scheduler-ha-543552-m02" [06bfa48f-a357-4c0b-a36d-fd9802387211] Running
	I0416 16:36:19.739997   20924 system_pods.go:61] "kube-scheduler-ha-543552-m03" [4b562a1e-9bba-4208-b04d-a0dbee0c9e7e] Running
	I0416 16:36:19.740002   20924 system_pods.go:61] "kube-vip-ha-543552" [73f7261f-431b-4d66-9567-cd65dafbf212] Running
	I0416 16:36:19.740006   20924 system_pods.go:61] "kube-vip-ha-543552-m02" [315f50da-9df3-47a5-a88f-72857a417304] Running
	I0416 16:36:19.740011   20924 system_pods.go:61] "kube-vip-ha-543552-m03" [cca4c658-0439-4cef-b7f9-b8cc2b66a222] Running
	I0416 16:36:19.740016   20924 system_pods.go:61] "storage-provisioner" [663f4c76-01f8-4664-9345-740540fdc41c] Running
	I0416 16:36:19.740024   20924 system_pods.go:74] duration metric: took 184.020561ms to wait for pod list to return data ...
	I0416 16:36:19.740035   20924 default_sa.go:34] waiting for default service account to be created ...
	I0416 16:36:19.923429   20924 request.go:629] Waited for 183.326312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0416 16:36:19.923500   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0416 16:36:19.923505   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.923513   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.923516   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.927571   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:19.927759   20924 default_sa.go:45] found service account: "default"
	I0416 16:36:19.927781   20924 default_sa.go:55] duration metric: took 187.737838ms for default service account to be created ...
	I0416 16:36:19.927790   20924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 16:36:20.123346   20924 request.go:629] Waited for 195.490445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:20.123407   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:20.123412   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:20.123419   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:20.123424   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:20.132410   20924 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 16:36:20.139891   20924 system_pods.go:86] 24 kube-system pods found
	I0416 16:36:20.139917   20924 system_pods.go:89] "coredns-76f75df574-k7bn7" [8f45a7f4-5779-49ad-949c-29fe8ad7d485] Running
	I0416 16:36:20.139923   20924 system_pods.go:89] "coredns-76f75df574-l9zck" [4f0d01cc-4c32-4953-88ec-f07e72666894] Running
	I0416 16:36:20.139927   20924 system_pods.go:89] "etcd-ha-543552" [e0b55a81-bfa4-4ba4-adde-69d72d728240] Running
	I0416 16:36:20.139931   20924 system_pods.go:89] "etcd-ha-543552-m02" [79a7bdf2-6297-434f-afde-dcee38a7f4b6] Running
	I0416 16:36:20.139936   20924 system_pods.go:89] "etcd-ha-543552-m03" [6634160f-7d48-4458-8628-2b3f340d8810] Running
	I0416 16:36:20.139940   20924 system_pods.go:89] "kindnet-6wbkm" [1aa2a9c0-7c95-49ca-817d-1dfaaff56145] Running
	I0416 16:36:20.139945   20924 system_pods.go:89] "kindnet-7hwtp" [f54400cd-4ab3-4e00-b741-e1419d1b3b66] Running
	I0416 16:36:20.139948   20924 system_pods.go:89] "kindnet-q4275" [2f65c59e-1e69-402a-af3a-2c28f7783c9f] Running
	I0416 16:36:20.139952   20924 system_pods.go:89] "kube-apiserver-ha-543552" [4010eca2-0d2e-46c1-9c8f-59961c27c3bf] Running
	I0416 16:36:20.139956   20924 system_pods.go:89] "kube-apiserver-ha-543552-m02" [f2e26e25-fb61-4754-a98b-1c0235c2907f] Running
	I0416 16:36:20.139960   20924 system_pods.go:89] "kube-apiserver-ha-543552-m03" [e20ae43c-f3ac-45fc-a7ac-2b193c0e4a59] Running
	I0416 16:36:20.139965   20924 system_pods.go:89] "kube-controller-manager-ha-543552" [9aa3103c-1ada-4947-84cb-c6d6c80274f0] Running
	I0416 16:36:20.139972   20924 system_pods.go:89] "kube-controller-manager-ha-543552-m02" [d0cfc02d-baa6-4c39-960a-c94989f7f545] Running
	I0416 16:36:20.139976   20924 system_pods.go:89] "kube-controller-manager-ha-543552-m03" [779ae963-1dfb-4d6e-bf23-c49a60880bdd] Running
	I0416 16:36:20.139982   20924 system_pods.go:89] "kube-proxy-2vkts" [4d33f122-fdc5-47ef-abd8-1e3074401db9] Running
	I0416 16:36:20.139986   20924 system_pods.go:89] "kube-proxy-9ncrw" [7c22a15b-35f1-4a08-b5ad-889f7d14706c] Running
	I0416 16:36:20.139992   20924 system_pods.go:89] "kube-proxy-c9lhc" [b8027952-1449-42c9-9bea-14aa1eb113aa] Running
	I0416 16:36:20.139996   20924 system_pods.go:89] "kube-scheduler-ha-543552" [644f8507-38cf-41d2-8c3a-cf1d2817bcff] Running
	I0416 16:36:20.140002   20924 system_pods.go:89] "kube-scheduler-ha-543552-m02" [06bfa48f-a357-4c0b-a36d-fd9802387211] Running
	I0416 16:36:20.140006   20924 system_pods.go:89] "kube-scheduler-ha-543552-m03" [4b562a1e-9bba-4208-b04d-a0dbee0c9e7e] Running
	I0416 16:36:20.140013   20924 system_pods.go:89] "kube-vip-ha-543552" [73f7261f-431b-4d66-9567-cd65dafbf212] Running
	I0416 16:36:20.140016   20924 system_pods.go:89] "kube-vip-ha-543552-m02" [315f50da-9df3-47a5-a88f-72857a417304] Running
	I0416 16:36:20.140022   20924 system_pods.go:89] "kube-vip-ha-543552-m03" [cca4c658-0439-4cef-b7f9-b8cc2b66a222] Running
	I0416 16:36:20.140025   20924 system_pods.go:89] "storage-provisioner" [663f4c76-01f8-4664-9345-740540fdc41c] Running
	I0416 16:36:20.140035   20924 system_pods.go:126] duration metric: took 212.238596ms to wait for k8s-apps to be running ...
	I0416 16:36:20.140044   20924 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 16:36:20.140087   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:36:20.158413   20924 system_svc.go:56] duration metric: took 18.358997ms WaitForService to wait for kubelet
	I0416 16:36:20.158453   20924 kubeadm.go:576] duration metric: took 14.570948499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:36:20.158476   20924 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:36:20.322977   20924 request.go:629] Waited for 164.434484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes
	I0416 16:36:20.323048   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes
	I0416 16:36:20.323053   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:20.323061   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:20.323068   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:20.327426   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:20.328773   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:36:20.328798   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:36:20.328811   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:36:20.328816   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:36:20.328819   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:36:20.328823   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:36:20.328826   20924 node_conditions.go:105] duration metric: took 170.345289ms to run NodePressure ...
	I0416 16:36:20.328853   20924 start.go:240] waiting for startup goroutines ...
	I0416 16:36:20.328880   20924 start.go:254] writing updated cluster config ...
	I0416 16:36:20.329168   20924 ssh_runner.go:195] Run: rm -f paused
	I0416 16:36:20.385164   20924 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 16:36:20.387211   20924 out.go:177] * Done! kubectl is now configured to use "ha-543552" cluster and "default" namespace by default
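The readiness loop logged above can be reproduced by hand against the same cluster. A minimal sketch, assuming minikube named the kubectl context after the profile ("ha-543552") and that the kubeadm "component" labels listed in the log are present on the control-plane pods:

	# wait for the control-plane pods the log was polling (hypothetical invocation)
	kubectl --context ha-543552 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
	# the same healthz and version probes performed at 16:36:19
	kubectl --context ha-543552 get --raw /healthz
	kubectl --context ha-543552 get --raw /version
	# list the 24 kube-system pods and inspect node capacity (cpu=2, ephemeral-storage=17734596Ki in the log)
	kubectl --context ha-543552 get pods -n kube-system
	kubectl --context ha-543552 describe nodes | grep -A 5 Capacity

These are standard kubectl invocations; the context name and label selector are taken from the log output above and may differ in other runs.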
	
	
	==> CRI-O <==
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.630848140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713285589630826036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72856b55-5c9b-4ea4-9c31-7f6edfe900f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.631522296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bcf7a15-1db9-4ad6-a5d9-5c087c639743 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.631597954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bcf7a15-1db9-4ad6-a5d9-5c087c639743 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.631910769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285382937724249,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238764737007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c5cf1df494c2d059ee58deebee8c2fba0939877bf3482df66d2bae402ca39f,PodSandboxId:a709b139696349b04b29d63dc2d87b74725a76db992e41b053e2926bae539aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713285238725833599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238689850011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5
779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555,PodSandboxId:2b6c3518676ac2f2f09ec1eb2e69aee774a63dd5df2ad01707839c9aaf7c79dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285
236621935463,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285236321624687,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c,PodSandboxId:d742d545e022a16a6d58e4e0a84f9df2ad19bce1a6257f78b8e4ee0c64c35593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285216603865573,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 827abfbff9325d32b15386c2e6a23718,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285214233623633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e,PodSandboxId:564e47e5a81fc6c1648c94a0a3ef7412ebd65f1802fe59d6cca488dadb41377b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285214266364806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285214183872384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88,PodSandboxId:bbd97783ca669efb2cf652170e0abe2712537ce963e1f0c32b14010beadad122,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285214153135328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bcf7a15-1db9-4ad6-a5d9-5c087c639743 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.673535202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c041c198-0048-4244-8cd0-107f2242888d name=/runtime.v1.RuntimeService/Version
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.673606890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c041c198-0048-4244-8cd0-107f2242888d name=/runtime.v1.RuntimeService/Version
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.676231909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71879a39-6e73-4e1d-9ba3-0e4da5aba401 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.676641816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713285589676613988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71879a39-6e73-4e1d-9ba3-0e4da5aba401 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.678044029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c58715a-e18c-4888-b7e0-7424040738bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.678096983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c58715a-e18c-4888-b7e0-7424040738bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.678318029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285382937724249,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238764737007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c5cf1df494c2d059ee58deebee8c2fba0939877bf3482df66d2bae402ca39f,PodSandboxId:a709b139696349b04b29d63dc2d87b74725a76db992e41b053e2926bae539aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713285238725833599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238689850011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5
779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555,PodSandboxId:2b6c3518676ac2f2f09ec1eb2e69aee774a63dd5df2ad01707839c9aaf7c79dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285
236621935463,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285236321624687,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c,PodSandboxId:d742d545e022a16a6d58e4e0a84f9df2ad19bce1a6257f78b8e4ee0c64c35593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285216603865573,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 827abfbff9325d32b15386c2e6a23718,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285214233623633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e,PodSandboxId:564e47e5a81fc6c1648c94a0a3ef7412ebd65f1802fe59d6cca488dadb41377b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285214266364806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285214183872384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88,PodSandboxId:bbd97783ca669efb2cf652170e0abe2712537ce963e1f0c32b14010beadad122,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285214153135328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c58715a-e18c-4888-b7e0-7424040738bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.720745731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25326499-7d1f-496b-abb6-13afe7da7cbf name=/runtime.v1.RuntimeService/Version
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.720819412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25326499-7d1f-496b-abb6-13afe7da7cbf name=/runtime.v1.RuntimeService/Version
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.722081589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91a265af-5518-436b-a231-409695e40f9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.722527426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713285589722504138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91a265af-5518-436b-a231-409695e40f9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.723114105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f786031-7727-410b-9329-e42b7d032a62 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.723166111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f786031-7727-410b-9329-e42b7d032a62 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.723373812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285382937724249,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238764737007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c5cf1df494c2d059ee58deebee8c2fba0939877bf3482df66d2bae402ca39f,PodSandboxId:a709b139696349b04b29d63dc2d87b74725a76db992e41b053e2926bae539aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713285238725833599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238689850011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5
779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555,PodSandboxId:2b6c3518676ac2f2f09ec1eb2e69aee774a63dd5df2ad01707839c9aaf7c79dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285
236621935463,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285236321624687,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c,PodSandboxId:d742d545e022a16a6d58e4e0a84f9df2ad19bce1a6257f78b8e4ee0c64c35593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285216603865573,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 827abfbff9325d32b15386c2e6a23718,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285214233623633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e,PodSandboxId:564e47e5a81fc6c1648c94a0a3ef7412ebd65f1802fe59d6cca488dadb41377b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285214266364806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285214183872384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88,PodSandboxId:bbd97783ca669efb2cf652170e0abe2712537ce963e1f0c32b14010beadad122,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285214153135328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f786031-7727-410b-9329-e42b7d032a62 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.764898163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1056e73-38b9-4d95-97cd-91d070bcc6b9 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.765057175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1056e73-38b9-4d95-97cd-91d070bcc6b9 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.766874183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0f4a8a3-2386-4a6c-bf1d-616e34f4f3cd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.767425145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713285589767401122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0f4a8a3-2386-4a6c-bf1d-616e34f4f3cd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.768072995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b82dfa3-39ae-4b58-9641-3dbf57dbdc56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.768149821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b82dfa3-39ae-4b58-9641-3dbf57dbdc56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:39:49 ha-543552 crio[680]: time="2024-04-16 16:39:49.768416515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285382937724249,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238764737007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c5cf1df494c2d059ee58deebee8c2fba0939877bf3482df66d2bae402ca39f,PodSandboxId:a709b139696349b04b29d63dc2d87b74725a76db992e41b053e2926bae539aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713285238725833599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238689850011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5
779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555,PodSandboxId:2b6c3518676ac2f2f09ec1eb2e69aee774a63dd5df2ad01707839c9aaf7c79dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285
236621935463,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285236321624687,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c,PodSandboxId:d742d545e022a16a6d58e4e0a84f9df2ad19bce1a6257f78b8e4ee0c64c35593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285216603865573,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 827abfbff9325d32b15386c2e6a23718,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285214233623633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e,PodSandboxId:564e47e5a81fc6c1648c94a0a3ef7412ebd65f1802fe59d6cca488dadb41377b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285214266364806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285214183872384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88,PodSandboxId:bbd97783ca669efb2cf652170e0abe2712537ce963e1f0c32b14010beadad122,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285214153135328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b82dfa3-39ae-4b58-9641-3dbf57dbdc56 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4eff3ed28c1a6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0a4cbed3518bb       busybox-7fdf7869d9-zmcc2
	a326689cf68a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   7d0e2bbea0507       coredns-76f75df574-l9zck
	e0c5cf1df494c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   a709b13969634       storage-provisioner
	e82d4c4b6df66       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   3c0b61b8ba2ff       coredns-76f75df574-k7bn7
	c2c331bf17fe8       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               0                   2b6c3518676ac       kindnet-7hwtp
	697fe1db84b5d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      5 minutes ago       Running             kube-proxy                0                   016912d243f9d       kube-proxy-c9lhc
	b4d4b03694327       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   d742d545e022a       kube-vip-ha-543552
	495afba1f7549       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      6 minutes ago       Running             kube-controller-manager   0                   564e47e5a81fc       kube-controller-manager-ha-543552
	ce9f179d540bc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   f5aa5ed306340       etcd-ha-543552
	5f7d02aab74a8       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      6 minutes ago       Running             kube-scheduler            0                   158c5349515db       kube-scheduler-ha-543552
	80fb22fd3cc49       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      6 minutes ago       Running             kube-apiserver            0                   bbd97783ca669       kube-apiserver-ha-543552
	
	
	==> coredns [a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324] <==
	[INFO] 10.244.0.4:59922 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000511841s
	[INFO] 10.244.2.2:48182 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004454115s
	[INFO] 10.244.2.2:44194 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000236966s
	[INFO] 10.244.2.2:39038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163174s
	[INFO] 10.244.2.2:42477 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002852142s
	[INFO] 10.244.2.2:47206 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189393s
	[INFO] 10.244.1.2:55215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000293483s
	[INFO] 10.244.1.2:55166 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111209s
	[INFO] 10.244.1.2:36437 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001400626s
	[INFO] 10.244.1.2:38888 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185603s
	[INFO] 10.244.0.4:46391 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104951s
	[INFO] 10.244.0.4:59290 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001608985s
	[INFO] 10.244.0.4:39400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075172s
	[INFO] 10.244.2.2:50417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152413s
	[INFO] 10.244.2.2:51697 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216701s
	[INFO] 10.244.2.2:46301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158413s
	[INFO] 10.244.1.2:58450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001388s
	[INFO] 10.244.1.2:43346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108795s
	[INFO] 10.244.0.4:44420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074923s
	[INFO] 10.244.0.4:51452 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107645s
	[INFO] 10.244.2.2:44963 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121222s
	[INFO] 10.244.2.2:46302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00020113s
	[INFO] 10.244.2.2:51995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000170275s
	[INFO] 10.244.0.4:40157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126298s
	[INFO] 10.244.0.4:54438 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176652s
	
	
	==> coredns [e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108] <==
	[INFO] 10.244.0.4:49242 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001984411s
	[INFO] 10.244.2.2:34467 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272048s
	[INFO] 10.244.2.2:45332 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000229408s
	[INFO] 10.244.2.2:36963 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170135s
	[INFO] 10.244.1.2:42830 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002119141s
	[INFO] 10.244.1.2:44539 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228353s
	[INFO] 10.244.1.2:42961 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000595811s
	[INFO] 10.244.1.2:46668 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010375s
	[INFO] 10.244.0.4:42508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000269602s
	[INFO] 10.244.0.4:33007 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845252s
	[INFO] 10.244.0.4:45175 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124293s
	[INFO] 10.244.0.4:37034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123057s
	[INFO] 10.244.0.4:56706 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077781s
	[INFO] 10.244.2.2:48795 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014109s
	[INFO] 10.244.1.2:60733 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013497s
	[INFO] 10.244.1.2:47606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137564s
	[INFO] 10.244.0.4:43266 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102784s
	[INFO] 10.244.0.4:35773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161303s
	[INFO] 10.244.2.2:35260 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000298984s
	[INFO] 10.244.1.2:48933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119878s
	[INFO] 10.244.1.2:44462 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168252s
	[INFO] 10.244.1.2:50323 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147657s
	[INFO] 10.244.1.2:51016 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131163s
	[INFO] 10.244.0.4:50260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114104s
	[INFO] 10.244.0.4:37053 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068482s
	
	
	==> describe nodes <==
	Name:               ha-543552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_33_41_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:33:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:39:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:36:45 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:36:45 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:36:45 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:36:45 +0000   Tue, 16 Apr 2024 16:33:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-543552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6dd8560d23a945a5aa6d3b02a2c3dc1b
	  System UUID:                6dd8560d-23a9-45a5-aa6d-3b02a2c3dc1b
	  Boot ID:                    7c97db37-f0b9-4406-9537-1480d467974d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-zmcc2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 coredns-76f75df574-k7bn7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m56s
	  kube-system                 coredns-76f75df574-l9zck             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m56s
	  kube-system                 etcd-ha-543552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m9s
	  kube-system                 kindnet-7hwtp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m56s
	  kube-system                 kube-apiserver-ha-543552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-ha-543552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-proxy-c9lhc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-scheduler-ha-543552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-vip-ha-543552                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m53s  kube-proxy       
	  Normal  Starting                 6m9s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m9s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m9s   kubelet          Node ha-543552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s   kubelet          Node ha-543552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s   kubelet          Node ha-543552 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m56s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal  NodeReady                5m52s  kubelet          Node ha-543552 status is now: NodeReady
	  Normal  RegisteredNode           4m40s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal  RegisteredNode           3m32s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	
	
	Name:               ha-543552-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_34_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:34:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:37:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 16 Apr 2024 16:36:52 +0000   Tue, 16 Apr 2024 16:38:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 16 Apr 2024 16:36:52 +0000   Tue, 16 Apr 2024 16:38:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 16 Apr 2024 16:36:52 +0000   Tue, 16 Apr 2024 16:38:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 16 Apr 2024 16:36:52 +0000   Tue, 16 Apr 2024 16:38:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-543552-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2f4c6e70b7c46048863edfff3e863df
	  System UUID:                e2f4c6e7-0b7c-4604-8863-edfff3e863df
	  Boot ID:                    c70dbd0c-349c-4713-a6b1-4fa48198aed0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7wbjg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-543552-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-q4275                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m
	  kube-system                 kube-apiserver-ha-543552-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-ha-543552-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-2vkts                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-ha-543552-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-vip-ha-543552-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 4m56s            kube-proxy       
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)  kubelet          Node ha-543552-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)  kubelet          Node ha-543552-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x7 over 5m)  kubelet          Node ha-543552-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s            node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           4m40s            node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           3m32s            node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  NodeNotReady             106s             node-controller  Node ha-543552-m02 status is now: NodeNotReady
	
	
	Name:               ha-543552-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_36_05_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:39:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:36:29 +0000   Tue, 16 Apr 2024 16:35:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:36:29 +0000   Tue, 16 Apr 2024 16:35:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:36:29 +0000   Tue, 16 Apr 2024 16:35:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:36:29 +0000   Tue, 16 Apr 2024 16:36:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    ha-543552-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 affc17c9d3664ffba11e272d96fa3d10
	  System UUID:                affc17c9-d366-4ffb-a11e-272d96fa3d10
	  Boot ID:                    42171959-bc11-46c0-9578-af565ce67aa6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-2prpr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-543552-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m50s
	  kube-system                 kindnet-6wbkm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m51s
	  kube-system                 kube-apiserver-ha-543552-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ha-543552-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-proxy-9ncrw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-scheduler-ha-543552-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-vip-ha-543552-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node ha-543552-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node ha-543552-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node ha-543552-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal  RegisteredNode           3m32s                  node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	
	
	Name:               ha-543552-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_36_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:36:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:39:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:37:28 +0000   Tue, 16 Apr 2024 16:36:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:37:28 +0000   Tue, 16 Apr 2024 16:36:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:37:28 +0000   Tue, 16 Apr 2024 16:36:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:37:28 +0000   Tue, 16 Apr 2024 16:37:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-543552-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f46fde69f5e74ab18cd1001a10200bfb
	  System UUID:                f46fde69-f5e7-4ab1-8cd1-001a10200bfb
	  Boot ID:                    99a101a4-1c3b-4821-84ee-6c1ffce7c674
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hghz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m46s
	  kube-system                 kube-proxy-g5pqm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x2 over 2m53s)  kubelet          Node ha-543552-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x2 over 2m53s)  kubelet          Node ha-543552-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x2 over 2m53s)  kubelet          Node ha-543552-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-543552-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr16 16:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051391] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043432] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.624403] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.493869] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.688655] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.068457] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.060006] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073697] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.185591] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.154095] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.315435] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.805735] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.066066] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.494086] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.897359] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.972784] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.095897] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.136469] kauditd_printk_skb: 21 callbacks suppressed
	[Apr16 16:34] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1] <==
	{"level":"warn","ts":"2024-04-16T16:39:49.863144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:49.868524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:49.969437Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.150277Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.160525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.169923Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.171518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.177186Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.182753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.194408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.202223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.20981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.213219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.216577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.224119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.23082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.239138Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.245862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.248507Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.249536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.253846Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.261233Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.269331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.276742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:39:50.287379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 16:39:50 up 6 min,  0 users,  load average: 0.60, 0.37, 0.17
	Linux ha-543552 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555] <==
	I0416 16:39:18.231490       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:39:28.246376       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:39:28.246471       1 main.go:227] handling current node
	I0416 16:39:28.246483       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:39:28.246490       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:39:28.246597       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:39:28.246628       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:39:28.246677       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:39:28.246682       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:39:38.262004       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:39:38.262053       1 main.go:227] handling current node
	I0416 16:39:38.262124       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:39:38.262132       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:39:38.262235       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:39:38.262272       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:39:38.262326       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:39:38.262332       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:39:48.270134       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:39:48.270685       1 main.go:227] handling current node
	I0416 16:39:48.270752       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:39:48.270782       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:39:48.270932       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:39:48.271045       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:39:48.271138       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:39:48.271159       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88] <==
	I0416 16:33:37.777459       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:33:37.781749       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 16:33:37.781771       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:33:37.784462       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:33:37.786691       1 cache.go:39] Caches are synced for autoregister controller
	E0416 16:33:37.787068       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0416 16:33:38.026573       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:33:38.588283       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:33:38.592876       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:33:38.592933       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:33:39.226354       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:33:39.274469       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:33:39.415784       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:33:39.424885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.97]
	I0416 16:33:39.425765       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:33:39.430379       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:33:39.632050       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:33:41.220584       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:33:41.242589       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:33:41.252415       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:33:54.144612       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:33:54.181370       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0416 16:36:59.248633       1 trace.go:236] Trace[1774261545]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3f2c4d34-3af7-4df0-a83b-fdc32a1eed32,client:192.168.39.126,api-group:,api-version:v1,name:kube-proxy-tskwl,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-tskwl,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:DELETE (16-Apr-2024 16:36:58.735) (total time: 513ms):
	Trace[1774261545]: ---"Object deleted from database" 315ms (16:36:59.248)
	Trace[1774261545]: [513.338309ms] [513.338309ms] END
	
	
	==> kube-controller-manager [495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e] <==
	I0416 16:36:58.153399       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-mlxgv"
	I0416 16:36:59.194930       1 event.go:376] "Event occurred" object="ha-543552-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller"
	I0416 16:36:59.435527       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-543552-m04"
	I0416 16:36:59.700609       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zqhwm"
	I0416 16:36:59.857558       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-zqhwm"
	I0416 16:36:59.886282       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-s5k75"
	I0416 16:37:02.208878       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k52cr"
	I0416 16:37:02.334831       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-lvsz7"
	I0416 16:37:02.354321       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-bg4d8"
	I0416 16:37:04.214159       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fp4tj"
	I0416 16:37:04.302710       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-clv7c"
	I0416 16:37:04.302777       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-fp4tj"
	I0416 16:37:08.253630       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-543552-m04"
	I0416 16:38:04.473730       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-543552-m04"
	I0416 16:38:04.477180       1 event.go:376] "Event occurred" object="ha-543552-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-543552-m02 status is now: NodeNotReady"
	I0416 16:38:04.496231       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.516013       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.534379       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.557115       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.574568       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-7wbjg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.599131       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-2vkts" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.623732       1 event.go:376] "Event occurred" object="kube-system/kindnet-q4275" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.651853       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.669393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.435225ms"
	I0416 16:38:04.669657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.164µs"
	
	
	==> kube-proxy [697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18] <==
	I0416 16:33:56.602032       1 server_others.go:72] "Using iptables proxy"
	I0416 16:33:56.640935       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0416 16:33:56.707637       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:33:56.707703       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:33:56.707720       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:33:56.712410       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:33:56.713718       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:33:56.713785       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:33:56.721082       1 config.go:188] "Starting service config controller"
	I0416 16:33:56.721372       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:33:56.721448       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:33:56.721522       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:33:56.723915       1 config.go:315] "Starting node config controller"
	I0416 16:33:56.725460       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:33:56.822614       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:33:56.822738       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:33:56.825934       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9] <==
	W0416 16:33:38.946529       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:33:38.946586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:33:41.203343       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0416 16:36:57.928047       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s25jv\": pod kindnet-s25jv is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-s25jv" node="ha-543552-m04"
	E0416 16:36:57.928562       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 3c985da9-dade-474a-ab1f-75843d9b0fd6(kube-system/kindnet-s25jv) wasn't assumed so cannot be forgotten"
	E0416 16:36:57.928749       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s25jv\": pod kindnet-s25jv is already assigned to node \"ha-543552-m04\"" pod="kube-system/kindnet-s25jv"
	I0416 16:36:57.928829       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-s25jv" node="ha-543552-m04"
	E0416 16:36:57.929174       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g5pqm\": pod kube-proxy-g5pqm is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g5pqm" node="ha-543552-m04"
	E0416 16:36:57.929301       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod ffb4dcbe-b292-4915-b82b-c71e58f6de69(kube-system/kube-proxy-g5pqm) wasn't assumed so cannot be forgotten"
	E0416 16:36:57.929334       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g5pqm\": pod kube-proxy-g5pqm is already assigned to node \"ha-543552-m04\"" pod="kube-system/kube-proxy-g5pqm"
	I0416 16:36:57.929348       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-g5pqm" node="ha-543552-m04"
	E0416 16:36:58.057395       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mlxgv\": pod kube-proxy-mlxgv is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mlxgv" node="ha-543552-m04"
	E0416 16:36:58.057719       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mlxgv\": pod kube-proxy-mlxgv is already assigned to node \"ha-543552-m04\"" pod="kube-system/kube-proxy-mlxgv"
	E0416 16:36:59.730620       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ntsjq\": pod kindnet-ntsjq is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ntsjq" node="ha-543552-m04"
	E0416 16:36:59.730711       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod a3055093-18f1-4a2c-80e2-4d5809d6628e(kube-system/kindnet-ntsjq) wasn't assumed so cannot be forgotten"
	E0416 16:36:59.730751       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ntsjq\": pod kindnet-ntsjq is already assigned to node \"ha-543552-m04\"" pod="kube-system/kindnet-ntsjq"
	I0416 16:36:59.730773       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ntsjq" node="ha-543552-m04"
	E0416 16:36:59.735334       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s5k75\": pod kindnet-s5k75 is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-s5k75" node="ha-543552-m04"
	E0416 16:36:59.735423       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 3050441c-9f24-42fe-83c1-883f4c9ffc17(kube-system/kindnet-s5k75) wasn't assumed so cannot be forgotten"
	E0416 16:36:59.735455       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s5k75\": pod kindnet-s5k75 is already assigned to node \"ha-543552-m04\"" pod="kube-system/kindnet-s5k75"
	I0416 16:36:59.735480       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-s5k75" node="ha-543552-m04"
	E0416 16:37:02.233861       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k52cr\": pod kindnet-k52cr is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k52cr" node="ha-543552-m04"
	E0416 16:37:02.236277       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 862e02ec-536d-4056-a442-98f377da86b2(kube-system/kindnet-k52cr) wasn't assumed so cannot be forgotten"
	E0416 16:37:02.236508       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k52cr\": pod kindnet-k52cr is already assigned to node \"ha-543552-m04\"" pod="kube-system/kindnet-k52cr"
	I0416 16:37:02.236615       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k52cr" node="ha-543552-m04"
	
	
	==> kubelet <==
	Apr 16 16:35:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:35:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:35:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:36:21 ha-543552 kubelet[1371]: I0416 16:36:21.355759    1371 topology_manager.go:215] "Topology Admit Handler" podUID="861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c" podNamespace="default" podName="busybox-7fdf7869d9-zmcc2"
	Apr 16 16:36:21 ha-543552 kubelet[1371]: I0416 16:36:21.435905    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcw8t\" (UniqueName: \"kubernetes.io/projected/861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c-kube-api-access-mcw8t\") pod \"busybox-7fdf7869d9-zmcc2\" (UID: \"861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c\") " pod="default/busybox-7fdf7869d9-zmcc2"
	Apr 16 16:36:41 ha-543552 kubelet[1371]: E0416 16:36:41.439294    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:36:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:36:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:36:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:36:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:37:41 ha-543552 kubelet[1371]: E0416 16:37:41.434470    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:37:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:37:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:37:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:37:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:38:41 ha-543552 kubelet[1371]: E0416 16:38:41.432921    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:38:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:38:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:38:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:38:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:39:41 ha-543552 kubelet[1371]: E0416 16:39:41.435149    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:39:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:39:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:39:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:39:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-543552 -n ha-543552
helpers_test.go:261: (dbg) Run:  kubectl --context ha-543552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.04s)
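
To retrace this failure by hand, the post-mortem commands captured above can be re-run against the same profile. This is a minimal sketch using only commands that appear in this report; it assumes a built out/minikube-linux-amd64 and that the ha-543552 profile from this run still exists:

	out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-543552 -n ha-543552
	kubectl --context ha-543552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running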

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (48.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 3 (3.197288598s)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-543552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:39:54.938538   25708 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:39:54.938766   25708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:39:54.938775   25708 out.go:304] Setting ErrFile to fd 2...
	I0416 16:39:54.938779   25708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:39:54.938934   25708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:39:54.939136   25708 out.go:298] Setting JSON to false
	I0416 16:39:54.939167   25708 mustload.go:65] Loading cluster: ha-543552
	I0416 16:39:54.939293   25708 notify.go:220] Checking for updates...
	I0416 16:39:54.939511   25708 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:39:54.939527   25708 status.go:255] checking status of ha-543552 ...
	I0416 16:39:54.939930   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:54.939979   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:54.955558   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35825
	I0416 16:39:54.956242   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:54.956810   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:54.956832   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:54.957296   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:54.957484   25708 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:39:54.959064   25708 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:39:54.959079   25708 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:39:54.959383   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:54.959414   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:54.973382   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I0416 16:39:54.973744   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:54.974147   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:54.974170   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:54.974452   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:54.974619   25708 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:39:54.976945   25708 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:54.977294   25708 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:39:54.977317   25708 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:54.977452   25708 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:39:54.977717   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:54.977751   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:54.991668   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I0416 16:39:54.992180   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:54.992771   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:54.992792   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:54.993137   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:54.993316   25708 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:39:54.993509   25708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:54.993530   25708 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:39:54.996025   25708 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:54.996431   25708 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:39:54.996462   25708 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:54.996534   25708 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:39:54.996775   25708 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:39:54.996924   25708 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:39:54.997042   25708 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:39:55.082026   25708 ssh_runner.go:195] Run: systemctl --version
	I0416 16:39:55.088865   25708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:39:55.108735   25708 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:39:55.108801   25708 api_server.go:166] Checking apiserver status ...
	I0416 16:39:55.108915   25708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:55.128271   25708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W0416 16:39:55.140459   25708 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:39:55.140492   25708 ssh_runner.go:195] Run: ls
	I0416 16:39:55.146085   25708 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:39:55.150455   25708 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:39:55.150476   25708 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:39:55.150489   25708 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:39:55.150523   25708 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:39:55.150814   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:55.150854   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:55.165046   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
	I0416 16:39:55.165426   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:55.165958   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:55.165983   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:55.166360   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:55.166622   25708 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:39:55.168225   25708 status.go:330] ha-543552-m02 host status = "Running" (err=<nil>)
	I0416 16:39:55.168244   25708 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:39:55.168503   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:55.168538   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:55.182238   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0416 16:39:55.182600   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:55.183008   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:55.183030   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:55.183387   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:55.183550   25708 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:39:55.186009   25708 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:55.186374   25708 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:39:55.186404   25708 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:55.186508   25708 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:39:55.186886   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:55.186927   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:55.200690   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0416 16:39:55.201061   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:55.201483   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:55.201504   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:55.201866   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:55.202035   25708 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:39:55.202206   25708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:55.202228   25708 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:39:55.204890   25708 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:55.205325   25708 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:39:55.205348   25708 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:55.205466   25708 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:39:55.205626   25708 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:39:55.205775   25708 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:39:55.205870   25708 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	W0416 16:39:57.729078   25708 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:39:57.729167   25708 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0416 16:39:57.729181   25708 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:39:57.729188   25708 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 16:39:57.729211   25708 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:39:57.729222   25708 status.go:255] checking status of ha-543552-m03 ...
	I0416 16:39:57.729634   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:57.729685   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:57.744302   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0416 16:39:57.744668   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:57.745190   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:57.745216   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:57.745512   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:57.745726   25708 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:39:57.747397   25708 status.go:330] ha-543552-m03 host status = "Running" (err=<nil>)
	I0416 16:39:57.747415   25708 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:39:57.747826   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:57.747862   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:57.761475   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44919
	I0416 16:39:57.761808   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:57.762219   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:57.762241   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:57.762540   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:57.762733   25708 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:39:57.765273   25708 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:39:57.765609   25708 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:39:57.765652   25708 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:39:57.765933   25708 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:39:57.766192   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:57.766224   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:57.780656   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0416 16:39:57.781056   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:57.781444   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:57.781465   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:57.781755   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:57.781936   25708 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:39:57.782121   25708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:57.782140   25708 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:39:57.784372   25708 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:39:57.784806   25708 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:39:57.784831   25708 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:39:57.784969   25708 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:39:57.785120   25708 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:39:57.785288   25708 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:39:57.785388   25708 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:39:57.865478   25708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:39:57.880641   25708 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:39:57.880668   25708 api_server.go:166] Checking apiserver status ...
	I0416 16:39:57.880707   25708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:57.894477   25708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0416 16:39:57.905028   25708 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:39:57.905078   25708 ssh_runner.go:195] Run: ls
	I0416 16:39:57.912903   25708 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:39:57.919101   25708 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:39:57.919126   25708 status.go:422] ha-543552-m03 apiserver status = Running (err=<nil>)
	I0416 16:39:57.919136   25708 status.go:257] ha-543552-m03 status: &{Name:ha-543552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:39:57.919155   25708 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:39:57.919509   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:57.919549   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:57.936311   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
	I0416 16:39:57.936757   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:57.937230   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:57.937252   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:57.937506   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:57.937682   25708 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:39:57.939140   25708 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:39:57.939168   25708 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:39:57.939505   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:57.939547   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:57.954041   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0416 16:39:57.954430   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:57.954867   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:57.954889   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:57.955230   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:57.955429   25708 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:39:57.958410   25708 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:39:57.958832   25708 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:39:57.958867   25708 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:39:57.958966   25708 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:39:57.959246   25708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:57.959287   25708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:57.973845   25708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37417
	I0416 16:39:57.974264   25708 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:57.974700   25708 main.go:141] libmachine: Using API Version  1
	I0416 16:39:57.974751   25708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:57.975028   25708 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:57.975210   25708 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:39:57.975524   25708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:57.975545   25708 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:39:57.978115   25708 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:39:57.978481   25708 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:39:57.978523   25708 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:39:57.978589   25708 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:39:57.978740   25708 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:39:57.978906   25708 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:39:57.979069   25708 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:39:58.065447   25708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:39:58.081620   25708 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
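Note on the failure pattern above: the status command SSHes into each node and runs the disk-usage pipeline seen in the log ("df -h /var | awk 'NR==2{print $5}'"); when the TCP dial to 192.168.39.80:22 fails with "no route to host", the node is degraded to Host:Error / Kubelet:Nonexistent. The following is a minimal, hedged sketch of that probe, not minikube's actual sshutil/status code; the helper name probeVarUsage is hypothetical, the address, username and key path are copied from the log, and golang.org/x/crypto/ssh is assumed as the SSH client library.

// Sketch only: reproduce the per-node /var probe the log shows, and map a
// dial failure (e.g. "no route to host") to an error state.
package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

// probeVarUsage runs the same df pipeline the status command logs and
// returns the usage column for /var.
func probeVarUsage(addr, user, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg) // this is where m02 fails in the log
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.Output(`sh -c "df -h /var | awk 'NR==2{print $5}'"`)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Values taken from the log above; purely illustrative.
	usage, err := probeVarUsage("192.168.39.80:22", "docker",
		"/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa")
	if err != nil {
		fmt.Println("host: Error, kubelet: Nonexistent, apiserver: Nonexistent") // mirrors the report
		return
	}
	fmt.Println("/var usage:", usage)
}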
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 3 (4.999366654s)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-543552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:39:59.281020   25809 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:39:59.281278   25809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:39:59.281287   25809 out.go:304] Setting ErrFile to fd 2...
	I0416 16:39:59.281291   25809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:39:59.281561   25809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:39:59.281728   25809 out.go:298] Setting JSON to false
	I0416 16:39:59.281752   25809 mustload.go:65] Loading cluster: ha-543552
	I0416 16:39:59.281854   25809 notify.go:220] Checking for updates...
	I0416 16:39:59.282255   25809 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:39:59.282276   25809 status.go:255] checking status of ha-543552 ...
	I0416 16:39:59.282849   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:59.282895   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:59.299348   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37381
	I0416 16:39:59.299695   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:59.300300   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:39:59.300339   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:59.300662   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:59.300872   25809 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:39:59.302428   25809 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:39:59.302442   25809 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:39:59.302712   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:59.302749   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:59.316919   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0416 16:39:59.317375   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:59.317943   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:39:59.317967   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:59.318243   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:59.318394   25809 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:39:59.321101   25809 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:59.321529   25809 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:39:59.321578   25809 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:59.321712   25809 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:39:59.322075   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:59.322123   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:59.338973   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37969
	I0416 16:39:59.339492   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:59.340021   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:39:59.340048   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:59.340409   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:59.340628   25809 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:39:59.340863   25809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:59.340891   25809 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:39:59.344285   25809 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:59.344830   25809 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:39:59.344902   25809 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:39:59.345131   25809 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:39:59.345293   25809 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:39:59.345427   25809 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:39:59.345562   25809 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:39:59.433469   25809 ssh_runner.go:195] Run: systemctl --version
	I0416 16:39:59.440009   25809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:39:59.465229   25809 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:39:59.465259   25809 api_server.go:166] Checking apiserver status ...
	I0416 16:39:59.465295   25809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:39:59.485236   25809 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W0416 16:39:59.497121   25809 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:39:59.497169   25809 ssh_runner.go:195] Run: ls
	I0416 16:39:59.502032   25809 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:39:59.507666   25809 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:39:59.507683   25809 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:39:59.507691   25809 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:39:59.507710   25809 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:39:59.507972   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:59.508036   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:59.523076   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35983
	I0416 16:39:59.523511   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:59.524008   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:39:59.524030   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:59.524668   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:59.525931   25809 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:39:59.527401   25809 status.go:330] ha-543552-m02 host status = "Running" (err=<nil>)
	I0416 16:39:59.527422   25809 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:39:59.527696   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:59.527736   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:59.543095   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0416 16:39:59.543412   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:59.543821   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:39:59.543839   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:59.544134   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:59.544318   25809 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:39:59.547191   25809 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:59.547627   25809 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:39:59.547655   25809 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:59.547804   25809 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:39:59.548061   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:39:59.548100   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:39:59.561688   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0416 16:39:59.562158   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:39:59.562667   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:39:59.562689   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:39:59.563007   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:39:59.563180   25809 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:39:59.563373   25809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:39:59.563396   25809 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:39:59.565812   25809 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:59.566194   25809 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:39:59.566224   25809 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:39:59.566402   25809 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:39:59.566569   25809 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:39:59.566739   25809 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:39:59.566905   25809 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	W0416 16:40:00.801120   25809 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:00.801189   25809 retry.go:31] will retry after 306.650725ms: dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:40:03.873122   25809 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:40:03.873236   25809 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0416 16:40:03.873264   25809 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:03.873277   25809 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 16:40:03.873299   25809 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:03.873309   25809 status.go:255] checking status of ha-543552-m03 ...
	I0416 16:40:03.873595   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:03.873633   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:03.888339   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0416 16:40:03.888756   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:03.889237   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:40:03.889264   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:03.889573   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:03.889797   25809 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:40:03.891386   25809 status.go:330] ha-543552-m03 host status = "Running" (err=<nil>)
	I0416 16:40:03.891398   25809 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:03.891659   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:03.891691   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:03.906381   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I0416 16:40:03.906750   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:03.907235   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:40:03.907261   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:03.907557   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:03.907755   25809 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:40:03.910306   25809 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:03.910707   25809 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:03.910734   25809 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:03.910864   25809 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:03.911186   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:03.911222   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:03.925891   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0416 16:40:03.926324   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:03.926757   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:40:03.926777   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:03.927057   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:03.927263   25809 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:40:03.927422   25809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:03.927438   25809 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:40:03.930150   25809 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:03.930576   25809 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:03.930607   25809 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:03.930739   25809 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:40:03.930899   25809 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:40:03.931048   25809 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:40:03.931210   25809 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:40:04.009657   25809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:04.030197   25809 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:04.030229   25809 api_server.go:166] Checking apiserver status ...
	I0416 16:40:04.030272   25809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:04.045376   25809 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0416 16:40:04.056079   25809 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:04.056117   25809 ssh_runner.go:195] Run: ls
	I0416 16:40:04.061229   25809 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:04.067845   25809 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:04.067870   25809 status.go:422] ha-543552-m03 apiserver status = Running (err=<nil>)
	I0416 16:40:04.067881   25809 status.go:257] ha-543552-m03 status: &{Name:ha-543552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:04.067899   25809 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:40:04.068303   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:04.068352   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:04.082586   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38337
	I0416 16:40:04.083006   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:04.083427   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:40:04.083447   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:04.083744   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:04.083918   25809 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:40:04.085472   25809 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:40:04.085489   25809 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:04.085776   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:04.085812   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:04.099675   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42345
	I0416 16:40:04.100036   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:04.100433   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:40:04.100453   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:04.100746   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:04.100955   25809 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:40:04.103322   25809 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:04.103722   25809 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:04.103748   25809 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:04.103891   25809 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:04.104234   25809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:04.104275   25809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:04.118553   25809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0416 16:40:04.118913   25809 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:04.119330   25809 main.go:141] libmachine: Using API Version  1
	I0416 16:40:04.119349   25809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:04.119622   25809 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:04.119816   25809 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:40:04.119997   25809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:04.120019   25809 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:40:04.122450   25809 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:04.122822   25809 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:04.122852   25809 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:04.122927   25809 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:40:04.123064   25809 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:40:04.123211   25809 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:40:04.123367   25809 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:40:04.208713   25809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:04.225036   25809 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
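Two details in the control-plane checks above are worth separating: the repeated "unable to find freezer cgroup" warning is non-fatal (most likely the guest uses cgroup v2, which has no freezer hierarchy line in /proc/<pid>/cgroup, so the egrep exits 1), and the status then falls through to the healthz probe against the load-balancer address, which returns 200 "ok". Below is a hedged sketch of that healthz probe only, assuming the endpoint from the log; apiserverHealthy is a hypothetical helper, and skipping TLS verification stands in for trusting the cluster CA as the real tooling does.

// Sketch only: report the apiserver as Running when /healthz returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed test cluster; verification skipped for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log expects HTTP 200 with body "ok".
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println("apiserver Running:", ok, "err:", err)
}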
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 3 (5.072847325s)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-543552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:40:05.354582   25909 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:40:05.354818   25909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:05.354829   25909 out.go:304] Setting ErrFile to fd 2...
	I0416 16:40:05.354834   25909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:05.355085   25909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:40:05.355282   25909 out.go:298] Setting JSON to false
	I0416 16:40:05.355311   25909 mustload.go:65] Loading cluster: ha-543552
	I0416 16:40:05.355418   25909 notify.go:220] Checking for updates...
	I0416 16:40:05.356106   25909 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:40:05.356132   25909 status.go:255] checking status of ha-543552 ...
	I0416 16:40:05.357054   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:05.357244   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:05.372877   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35369
	I0416 16:40:05.373230   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:05.373824   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:05.373851   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:05.374247   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:05.374434   25909 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:40:05.375976   25909 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:40:05.375992   25909 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:05.376386   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:05.376428   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:05.390978   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34343
	I0416 16:40:05.391339   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:05.391798   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:05.391823   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:05.392104   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:05.392304   25909 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:40:05.394777   25909 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:05.395208   25909 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:05.395242   25909 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:05.395359   25909 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:05.395794   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:05.395845   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:05.410452   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0416 16:40:05.410890   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:05.411374   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:05.411408   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:05.411767   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:05.411943   25909 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:40:05.412115   25909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:05.412138   25909 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:40:05.414939   25909 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:05.415349   25909 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:05.415386   25909 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:05.415529   25909 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:40:05.415721   25909 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:40:05.415880   25909 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:40:05.416025   25909 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:40:05.501068   25909 ssh_runner.go:195] Run: systemctl --version
	I0416 16:40:05.507552   25909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:05.522714   25909 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:05.522742   25909 api_server.go:166] Checking apiserver status ...
	I0416 16:40:05.522772   25909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:05.537131   25909 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W0416 16:40:05.546792   25909 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:05.546842   25909 ssh_runner.go:195] Run: ls
	I0416 16:40:05.551510   25909 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:05.558261   25909 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:05.558284   25909 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:40:05.558296   25909 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:05.558315   25909 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:40:05.558655   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:05.558691   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:05.573834   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0416 16:40:05.574289   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:05.574798   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:05.574818   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:05.575115   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:05.575326   25909 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:40:05.577089   25909 status.go:330] ha-543552-m02 host status = "Running" (err=<nil>)
	I0416 16:40:05.577105   25909 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:40:05.577414   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:05.577453   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:05.592806   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0416 16:40:05.593280   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:05.593839   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:05.593859   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:05.594210   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:05.594398   25909 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:40:05.597108   25909 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:05.597480   25909 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:40:05.597516   25909 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:05.597658   25909 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:40:05.597962   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:05.598005   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:05.613034   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41149
	I0416 16:40:05.613426   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:05.613883   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:05.613907   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:05.614208   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:05.614358   25909 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:40:05.614554   25909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:05.614572   25909 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:40:05.617276   25909 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:05.617655   25909 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:40:05.617674   25909 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:05.617851   25909 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:40:05.618052   25909 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:40:05.618238   25909 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:40:05.618402   25909 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	W0416 16:40:06.949094   25909 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:06.949158   25909 retry.go:31] will retry after 300.976362ms: dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:40:10.021120   25909 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:40:10.021229   25909 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0416 16:40:10.021249   25909 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:10.021255   25909 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 16:40:10.021281   25909 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:10.021288   25909 status.go:255] checking status of ha-543552-m03 ...
	I0416 16:40:10.021570   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:10.021610   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:10.038579   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43137
	I0416 16:40:10.039045   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:10.039522   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:10.039547   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:10.039858   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:10.040053   25909 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:40:10.041643   25909 status.go:330] ha-543552-m03 host status = "Running" (err=<nil>)
	I0416 16:40:10.041662   25909 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:10.041971   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:10.042009   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:10.056058   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0416 16:40:10.056447   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:10.056965   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:10.056986   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:10.057402   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:10.057607   25909 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:40:10.060124   25909 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:10.060532   25909 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:10.060566   25909 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:10.060712   25909 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:10.061188   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:10.061241   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:10.075085   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44643
	I0416 16:40:10.075456   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:10.075873   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:10.075898   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:10.076161   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:10.076330   25909 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:40:10.076488   25909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:10.076504   25909 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:40:10.078763   25909 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:10.079152   25909 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:10.079186   25909 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:10.079394   25909 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:40:10.079561   25909 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:40:10.079719   25909 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:40:10.079882   25909 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:40:10.165123   25909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:10.181667   25909 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:10.181697   25909 api_server.go:166] Checking apiserver status ...
	I0416 16:40:10.181735   25909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:10.198644   25909 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0416 16:40:10.209723   25909 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:10.209762   25909 ssh_runner.go:195] Run: ls
	I0416 16:40:10.215023   25909 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:10.219319   25909 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:10.219335   25909 status.go:422] ha-543552-m03 apiserver status = Running (err=<nil>)
	I0416 16:40:10.219342   25909 status.go:257] ha-543552-m03 status: &{Name:ha-543552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:10.219355   25909 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:40:10.219687   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:10.219725   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:10.234025   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0416 16:40:10.234409   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:10.234865   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:10.234895   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:10.235244   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:10.235424   25909 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:40:10.236924   25909 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:40:10.236945   25909 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:10.237231   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:10.237265   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:10.251452   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0416 16:40:10.251802   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:10.252252   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:10.252272   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:10.252580   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:10.252782   25909 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:40:10.255637   25909 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:10.256155   25909 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:10.256182   25909 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:10.256314   25909 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:10.256589   25909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:10.256622   25909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:10.270758   25909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0416 16:40:10.271155   25909 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:10.271583   25909 main.go:141] libmachine: Using API Version  1
	I0416 16:40:10.271602   25909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:10.271896   25909 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:10.272103   25909 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:40:10.272291   25909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:10.272310   25909 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:40:10.274879   25909 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:10.275260   25909 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:10.275279   25909 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:10.275423   25909 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:40:10.275578   25909 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:40:10.275754   25909 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:40:10.275876   25909 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:40:10.360608   25909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:10.376094   25909 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
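The probe sequence visible at the end of the stderr above is what the status command runs against each node it can reach: open an SSH session as the docker user, read /var usage with `df -h /var | awk 'NR==2{print $5}'` (ssh_runner.go:195), then check the kubelet unit with `sudo systemctl is-active --quiet service kubelet`. The sketch below reproduces just those two checks with golang.org/x/crypto/ssh as a simplified stand-in for minikube's ssh_runner/sshutil; the hard-coded address and the key path under $HOME/.minikube are assumptions standing in for the Jenkins-specific values in the log.

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// probeNode mirrors the two per-node checks in the log: free space on /var
	// and whether the kubelet systemd unit is active. It is a simplified
	// stand-in for minikube's ssh_runner, not the real implementation.
	func probeNode(addr, user, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return fmt.Errorf("new client: %w", err) // this is where "no route to host" surfaces
		}
		defer client.Close()

		for _, cmd := range []string{
			`df -h /var | awk 'NR==2{print $5}'`,
			"sudo systemctl is-active --quiet service kubelet",
		} {
			sess, err := client.NewSession()
			if err != nil {
				return err
			}
			out, err := sess.CombinedOutput(cmd)
			sess.Close()
			fmt.Printf("%-50s -> %q err=%v\n", cmd, out, err)
		}
		return nil
	}

	func main() {
		// Address and key path are illustrative; the log uses the Jenkins
		// workspace paths and the VM IPs assigned on the mk-ha-543552 network.
		err := probeNode("192.168.39.126:22", "docker",
			os.ExpandEnv("$HOME/.minikube/machines/ha-543552-m04/id_rsa"))
		if err != nil {
			log.Fatal(err)
		}
	}

For ha-543552-m04 both checks succeed, which is why the worker is reported as Host:Running / Kubelet:Running with APIServer and Kubeconfig marked Irrelevant.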
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 3 (4.915109555s)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-543552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:40:11.662344   26026 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:40:11.662462   26026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:11.662472   26026 out.go:304] Setting ErrFile to fd 2...
	I0416 16:40:11.662476   26026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:11.662678   26026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:40:11.662839   26026 out.go:298] Setting JSON to false
	I0416 16:40:11.662865   26026 mustload.go:65] Loading cluster: ha-543552
	I0416 16:40:11.662975   26026 notify.go:220] Checking for updates...
	I0416 16:40:11.663233   26026 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:40:11.663247   26026 status.go:255] checking status of ha-543552 ...
	I0416 16:40:11.663588   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:11.663642   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:11.680145   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0416 16:40:11.680719   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:11.681249   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:11.681271   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:11.681642   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:11.681825   26026 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:40:11.683329   26026 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:40:11.683344   26026 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:11.683663   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:11.683700   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:11.698354   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42977
	I0416 16:40:11.698730   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:11.699266   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:11.699301   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:11.699590   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:11.699815   26026 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:40:11.702536   26026 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:11.703005   26026 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:11.703032   26026 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:11.703169   26026 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:11.703460   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:11.703527   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:11.718454   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35385
	I0416 16:40:11.718801   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:11.719260   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:11.719282   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:11.719589   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:11.719816   26026 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:40:11.719994   26026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:11.720020   26026 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:40:11.722556   26026 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:11.723018   26026 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:11.723047   26026 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:11.723235   26026 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:40:11.723413   26026 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:40:11.723588   26026 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:40:11.723794   26026 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:40:11.809935   26026 ssh_runner.go:195] Run: systemctl --version
	I0416 16:40:11.817127   26026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:11.841040   26026 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:11.841074   26026 api_server.go:166] Checking apiserver status ...
	I0416 16:40:11.841113   26026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:11.858064   26026 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W0416 16:40:11.868551   26026 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:11.868593   26026 ssh_runner.go:195] Run: ls
	I0416 16:40:11.874996   26026 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:11.880536   26026 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:11.880563   26026 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:40:11.880581   26026 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:11.880598   26026 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:40:11.880990   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:11.881086   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:11.896357   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0416 16:40:11.896797   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:11.897681   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:11.897743   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:11.898910   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:11.899153   26026 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:40:11.900703   26026 status.go:330] ha-543552-m02 host status = "Running" (err=<nil>)
	I0416 16:40:11.900723   26026 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:40:11.901156   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:11.901203   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:11.915822   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43241
	I0416 16:40:11.916179   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:11.916594   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:11.916618   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:11.916941   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:11.917125   26026 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:40:11.919567   26026 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:11.919982   26026 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:40:11.920010   26026 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:11.920151   26026 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:40:11.920456   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:11.920499   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:11.934704   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0416 16:40:11.935038   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:11.935482   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:11.935504   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:11.935812   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:11.935998   26026 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:40:11.936208   26026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:11.936228   26026 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:40:11.939105   26026 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:11.939542   26026 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:40:11.939571   26026 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:11.939715   26026 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:40:11.939865   26026 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:40:11.940045   26026 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:40:11.940210   26026 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	W0416 16:40:13.089129   26026 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:13.089195   26026 retry.go:31] will retry after 171.996143ms: dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:40:16.161145   26026 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:40:16.161264   26026 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0416 16:40:16.161283   26026 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:16.161293   26026 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 16:40:16.161321   26026 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:16.161331   26026 status.go:255] checking status of ha-543552-m03 ...
	I0416 16:40:16.161635   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:16.161674   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:16.176383   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0416 16:40:16.176782   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:16.177225   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:16.177248   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:16.177549   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:16.177730   26026 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:40:16.179209   26026 status.go:330] ha-543552-m03 host status = "Running" (err=<nil>)
	I0416 16:40:16.179226   26026 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:16.179566   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:16.179610   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:16.193436   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0416 16:40:16.193790   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:16.194209   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:16.194230   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:16.194548   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:16.194725   26026 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:40:16.197584   26026 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:16.197976   26026 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:16.198005   26026 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:16.198164   26026 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:16.198444   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:16.198481   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:16.212170   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0416 16:40:16.212510   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:16.212957   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:16.212980   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:16.213262   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:16.213450   26026 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:40:16.213661   26026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:16.213684   26026 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:40:16.216210   26026 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:16.216798   26026 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:16.216828   26026 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:16.217002   26026 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:40:16.217158   26026 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:40:16.217290   26026 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:40:16.217398   26026 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:40:16.301425   26026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:16.322273   26026 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:16.322296   26026 api_server.go:166] Checking apiserver status ...
	I0416 16:40:16.322328   26026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:16.339468   26026 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0416 16:40:16.352375   26026 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:16.352414   26026 ssh_runner.go:195] Run: ls
	I0416 16:40:16.357399   26026 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:16.361802   26026 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:16.361827   26026 status.go:422] ha-543552-m03 apiserver status = Running (err=<nil>)
	I0416 16:40:16.361838   26026 status.go:257] ha-543552-m03 status: &{Name:ha-543552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:16.361857   26026 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:40:16.362168   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:16.362217   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:16.377762   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I0416 16:40:16.378181   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:16.378659   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:16.378681   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:16.378945   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:16.379128   26026 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:40:16.380700   26026 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:40:16.380716   26026 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:16.381012   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:16.381051   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:16.395429   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0416 16:40:16.395770   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:16.396219   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:16.396238   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:16.396540   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:16.396745   26026 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:40:16.399319   26026 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:16.399718   26026 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:16.399753   26026 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:16.399841   26026 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:16.400109   26026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:16.400158   26026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:16.413944   26026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38223
	I0416 16:40:16.414334   26026 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:16.414764   26026 main.go:141] libmachine: Using API Version  1
	I0416 16:40:16.414783   26026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:16.415061   26026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:16.415215   26026 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:40:16.415390   26026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:16.415409   26026 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:40:16.418045   26026 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:16.418480   26026 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:16.418508   26026 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:16.418655   26026 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:40:16.418823   26026 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:40:16.418951   26026 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:40:16.419091   26026 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:40:16.505029   26026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:16.522091   26026 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
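In the run above the failing node is ha-543552-m02: the SSH dial to 192.168.39.80:22 returns "no route to host", sshutil retries after roughly 172ms (retry.go:31), and when the retry also fails the df check is abandoned and the node is reported as Host:Error with Kubelet and APIServer marked Nonexistent. A minimal sketch of that bounded dial-and-retry pattern, written against the plain net package rather than minikube's own retry helper:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry mimics the pattern in the stderr above: try the node's SSH
	// port, and on failure (here "no route to host" for 192.168.39.80) back off
	// briefly and try again a bounded number of times. minikube's retry helper
	// (retry.go) is more general; this is only an illustration of the idea.
	func dialWithRetry(addr string, attempts int, backoff time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			lastErr = err
			fmt.Printf("dial failure (will retry): %v\n", err)
			time.Sleep(backoff)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		if err := dialWithRetry("192.168.39.80:22", 3, 200*time.Millisecond); err != nil {
			fmt.Println("ha-543552-m02 unreachable:", err)
		}
	}

Those failed dials are also what push this status invocation to ~4.9s, versus under a second for the final run below, where libvirt already reports the VM as shut off and the SSH probe is skipped.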
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 3 (3.744270275s)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-543552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:40:20.593317   26126 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:40:20.593445   26126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:20.593455   26126 out.go:304] Setting ErrFile to fd 2...
	I0416 16:40:20.593459   26126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:20.593609   26126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:40:20.593800   26126 out.go:298] Setting JSON to false
	I0416 16:40:20.593833   26126 mustload.go:65] Loading cluster: ha-543552
	I0416 16:40:20.593959   26126 notify.go:220] Checking for updates...
	I0416 16:40:20.594230   26126 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:40:20.594247   26126 status.go:255] checking status of ha-543552 ...
	I0416 16:40:20.594702   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:20.594758   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:20.610191   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41307
	I0416 16:40:20.610573   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:20.611077   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:20.611100   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:20.611476   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:20.611680   26126 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:40:20.613391   26126 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:40:20.613409   26126 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:20.613789   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:20.613828   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:20.629091   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0416 16:40:20.629499   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:20.629950   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:20.629978   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:20.630257   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:20.630463   26126 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:40:20.632886   26126 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:20.633399   26126 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:20.633421   26126 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:20.633566   26126 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:20.633817   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:20.633852   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:20.647996   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34361
	I0416 16:40:20.648366   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:20.648785   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:20.648806   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:20.649166   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:20.649360   26126 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:40:20.649559   26126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:20.649589   26126 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:40:20.652166   26126 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:20.652600   26126 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:20.652626   26126 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:20.652774   26126 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:40:20.652980   26126 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:40:20.653137   26126 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:40:20.653297   26126 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:40:20.737481   26126 ssh_runner.go:195] Run: systemctl --version
	I0416 16:40:20.744149   26126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:20.758928   26126 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:20.758963   26126 api_server.go:166] Checking apiserver status ...
	I0416 16:40:20.759016   26126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:20.773300   26126 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W0416 16:40:20.783572   26126 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:20.783640   26126 ssh_runner.go:195] Run: ls
	I0416 16:40:20.788594   26126 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:20.795825   26126 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:20.795846   26126 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:40:20.795856   26126 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:20.795870   26126 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:40:20.796211   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:20.796263   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:20.811622   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0416 16:40:20.812040   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:20.812479   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:20.812503   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:20.812792   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:20.812983   26126 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:40:20.814574   26126 status.go:330] ha-543552-m02 host status = "Running" (err=<nil>)
	I0416 16:40:20.814588   26126 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:40:20.814853   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:20.814882   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:20.829191   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0416 16:40:20.829541   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:20.829926   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:20.829956   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:20.830273   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:20.830444   26126 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:40:20.833178   26126 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:20.833622   26126 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:40:20.833651   26126 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:20.833773   26126 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:40:20.834036   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:20.834071   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:20.847828   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0416 16:40:20.848209   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:20.848635   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:20.848653   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:20.848997   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:20.849227   26126 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:40:20.849417   26126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:20.849442   26126 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:40:20.851928   26126 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:20.852446   26126 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:40:20.852473   26126 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:40:20.852583   26126 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:40:20.852773   26126 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:40:20.852930   26126 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:40:20.853063   26126 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	W0416 16:40:23.905075   26126 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	W0416 16:40:23.905172   26126 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0416 16:40:23.905195   26126 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:23.905208   26126 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 16:40:23.905225   26126 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0416 16:40:23.905231   26126 status.go:255] checking status of ha-543552-m03 ...
	I0416 16:40:23.905538   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:23.905580   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:23.920149   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0416 16:40:23.920562   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:23.921237   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:23.921273   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:23.921588   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:23.921789   26126 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:40:23.923160   26126 status.go:330] ha-543552-m03 host status = "Running" (err=<nil>)
	I0416 16:40:23.923177   26126 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:23.923482   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:23.923519   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:23.937794   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35359
	I0416 16:40:23.938177   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:23.938650   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:23.938681   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:23.938977   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:23.939162   26126 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:40:23.941681   26126 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:23.942053   26126 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:23.942088   26126 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:23.942243   26126 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:23.942513   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:23.942552   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:23.957031   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40491
	I0416 16:40:23.957475   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:23.957917   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:23.957939   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:23.958299   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:23.958459   26126 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:40:23.958610   26126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:23.958632   26126 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:40:23.961327   26126 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:23.961756   26126 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:23.961779   26126 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:23.961912   26126 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:40:23.962075   26126 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:40:23.962234   26126 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:40:23.962381   26126 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:40:24.045181   26126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:24.063953   26126 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:24.063980   26126 api_server.go:166] Checking apiserver status ...
	I0416 16:40:24.064010   26126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:24.079984   26126 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0416 16:40:24.103393   26126 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:24.103455   26126 ssh_runner.go:195] Run: ls
	I0416 16:40:24.109253   26126 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:24.116381   26126 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:24.116406   26126 status.go:422] ha-543552-m03 apiserver status = Running (err=<nil>)
	I0416 16:40:24.116417   26126 status.go:257] ha-543552-m03 status: &{Name:ha-543552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:24.116435   26126 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:40:24.116776   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:24.116815   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:24.131063   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0416 16:40:24.131430   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:24.131858   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:24.131877   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:24.132185   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:24.132344   26126 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:40:24.133968   26126 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:40:24.133986   26126 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:24.134297   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:24.134344   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:24.148985   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0416 16:40:24.149380   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:24.149852   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:24.149875   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:24.150234   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:24.150401   26126 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:40:24.153109   26126 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:24.153495   26126 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:24.153525   26126 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:24.153641   26126 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:24.153922   26126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:24.153964   26126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:24.169971   26126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0416 16:40:24.170349   26126 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:24.170805   26126 main.go:141] libmachine: Using API Version  1
	I0416 16:40:24.170829   26126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:24.171208   26126 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:24.171413   26126 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:40:24.171645   26126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:24.171668   26126 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:40:24.174326   26126 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:24.174738   26126 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:24.174785   26126 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:24.174852   26126 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:40:24.175051   26126 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:40:24.175186   26126 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:40:24.175366   26126 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:40:24.261795   26126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:24.280328   26126 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
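For the control-plane nodes that are reachable, every run repeats the same apiserver check: find the kube-apiserver PID with `sudo pgrep -xnf kube-apiserver.*minikube.*`, try (and, in these logs, fail with only a warning) to read its freezer cgroup, then probe the kubeconfig server address https://192.168.39.254:8443/healthz and treat a 200 "ok" as APIServer:Running. A minimal healthz probe in that spirit is sketched below, assuming TLS verification can be skipped for illustration; the real check in api_server.go presumably trusts the cluster's certificates instead.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs the same kind of probe the log shows against
	// https://192.168.39.254:8443/healthz: a GET that is considered healthy
	// only on a 200 response. Skipping TLS verification is a simplification
	// to keep the sketch self-contained, not what status.go does.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // body is "ok" when healthy
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
			fmt.Println("apiserver status = Error:", err)
			return
		}
		fmt.Println("apiserver status = Running")
	}

The next invocation below is the first one where GetState reports ha-543552-m02 as "Stopped" rather than "Running"; the SSH and apiserver checks are then skipped entirely (status.go:343), the node shows up as Host:Stopped instead of Host:Error, and the exit status changes from 3 to 7.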
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 7 (961.833581ms)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-543552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:40:31.210265   26242 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:40:31.210368   26242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:31.210382   26242 out.go:304] Setting ErrFile to fd 2...
	I0416 16:40:31.210389   26242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:31.210627   26242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:40:31.210852   26242 out.go:298] Setting JSON to false
	I0416 16:40:31.210881   26242 mustload.go:65] Loading cluster: ha-543552
	I0416 16:40:31.210920   26242 notify.go:220] Checking for updates...
	I0416 16:40:31.211348   26242 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:40:31.211364   26242 status.go:255] checking status of ha-543552 ...
	I0416 16:40:31.211827   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.211890   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.230235   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I0416 16:40:31.230610   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.231178   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.231206   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.231584   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.231805   26242 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:40:31.233180   26242 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:40:31.233202   26242 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:31.233515   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.233553   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.247445   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
	I0416 16:40:31.247848   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.248272   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.248299   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.248645   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.248832   26242 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:40:31.251274   26242 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:31.251685   26242 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:31.251719   26242 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:31.251829   26242 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:31.252114   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.252145   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.266024   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0416 16:40:31.266574   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.267099   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.267122   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.267486   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.267716   26242 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:40:31.267915   26242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:31.267947   26242 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:40:31.271210   26242 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:31.271693   26242 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:31.271727   26242 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:31.271857   26242 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:40:31.272057   26242 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:40:31.272225   26242 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:40:31.272399   26242 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:40:31.357857   26242 ssh_runner.go:195] Run: systemctl --version
	I0416 16:40:31.365999   26242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:31.382891   26242 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:31.382923   26242 api_server.go:166] Checking apiserver status ...
	I0416 16:40:31.382954   26242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:31.399295   26242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W0416 16:40:31.424819   26242 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:31.424877   26242 ssh_runner.go:195] Run: ls
	I0416 16:40:31.435225   26242 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:31.442492   26242 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:31.442516   26242 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:40:31.442527   26242 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:31.442548   26242 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:40:31.442815   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.442849   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.458099   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0416 16:40:31.458441   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.458935   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.458956   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.459337   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.459525   26242 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:40:31.757089   26242 status.go:330] ha-543552-m02 host status = "Stopped" (err=<nil>)
	I0416 16:40:31.757110   26242 status.go:343] host is not running, skipping remaining checks
	I0416 16:40:31.757116   26242 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:31.757141   26242 status.go:255] checking status of ha-543552-m03 ...
	I0416 16:40:31.757423   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.757500   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.772164   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0416 16:40:31.772554   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.773046   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.773073   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.773396   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.773585   26242 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:40:31.775136   26242 status.go:330] ha-543552-m03 host status = "Running" (err=<nil>)
	I0416 16:40:31.775151   26242 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:31.775416   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.775451   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.789929   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0416 16:40:31.790322   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.790764   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.790782   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.791087   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.791296   26242 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:40:31.793967   26242 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:31.794336   26242 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:31.794362   26242 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:31.794666   26242 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:31.795173   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.795213   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.810271   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43475
	I0416 16:40:31.810710   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.811245   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.811270   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.811668   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.811856   26242 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:40:31.812057   26242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:31.812080   26242 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:40:31.814927   26242 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:31.815385   26242 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:31.815421   26242 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:31.815558   26242 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:40:31.815755   26242 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:40:31.815887   26242 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:40:31.816026   26242 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:40:31.893262   26242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:31.912232   26242 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:31.912259   26242 api_server.go:166] Checking apiserver status ...
	I0416 16:40:31.912288   26242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:31.927964   26242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0416 16:40:31.939384   26242 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:31.939430   26242 ssh_runner.go:195] Run: ls
	I0416 16:40:31.945362   26242 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:31.949804   26242 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:31.949833   26242 status.go:422] ha-543552-m03 apiserver status = Running (err=<nil>)
	I0416 16:40:31.949841   26242 status.go:257] ha-543552-m03 status: &{Name:ha-543552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:31.949855   26242 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:40:31.950199   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.950238   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.966371   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I0416 16:40:31.966742   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.967290   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.967311   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.967606   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.967805   26242 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:40:31.969326   26242 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:40:31.969340   26242 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:31.969597   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.969627   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:31.984386   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0416 16:40:31.984871   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:31.985365   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:31.985395   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:31.985755   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:31.985947   26242 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:40:31.989141   26242 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:31.989668   26242 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:31.989694   26242 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:31.989830   26242 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:31.990150   26242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:31.990187   26242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:32.004777   26242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0416 16:40:32.005183   26242 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:32.005678   26242 main.go:141] libmachine: Using API Version  1
	I0416 16:40:32.005697   26242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:32.005990   26242 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:32.006170   26242 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:40:32.006354   26242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:32.006375   26242 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:40:32.008810   26242 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:32.009252   26242 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:32.009281   26242 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:32.009402   26242 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:40:32.009573   26242 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:40:32.009720   26242 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:40:32.009853   26242 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:40:32.097416   26242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:32.114084   26242 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
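Note on the stderr above: it traces how the status check verifies each control-plane apiserver — it pgreps for the kube-apiserver process over SSH, attempts a best-effort freezer-cgroup lookup (which exits non-zero on this guest, hence the "unable to find freezer cgroup" warning), and then probes the HA endpoint https://192.168.39.254:8443/healthz, expecting 200 "ok". The sketch below reproduces that sequence in simplified form; the helper name checkAPIServer, running the commands locally instead of through minikube's ssh_runner, and the skipped TLS verification are assumptions for illustration, not minikube's actual internals.

	// checkapiserver_sketch.go — a minimal, hypothetical sketch of the health-check
	// sequence logged above. Not minikube source; commands run locally rather than
	// over SSH, and the endpoint is the HA VIP seen in the logs.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	func checkAPIServer(endpoint string) error {
		// Step 1: find the kube-apiserver PID (mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`).
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return fmt.Errorf("kube-apiserver process not found: %w", err)
		}
		pid := strings.TrimSpace(string(out))

		// Step 2: best-effort freezer-cgroup lookup; on a cgroup v2 guest this exits
		// non-zero, which is the warning logged above, and the check simply continues.
		if err := exec.Command("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "freezer cgroup not found, continuing:", err)
		}

		// Step 3: probe /healthz on the load-balanced endpoint and expect HTTP 200.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://" + endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := checkAPIServer("192.168.39.254:8443"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("apiserver: Running")
	}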
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 7 (663.672444ms)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-543552-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:40:40.082110   26368 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:40:40.082214   26368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:40.082222   26368 out.go:304] Setting ErrFile to fd 2...
	I0416 16:40:40.082226   26368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:40.082421   26368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:40:40.082581   26368 out.go:298] Setting JSON to false
	I0416 16:40:40.082605   26368 mustload.go:65] Loading cluster: ha-543552
	I0416 16:40:40.082719   26368 notify.go:220] Checking for updates...
	I0416 16:40:40.082964   26368 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:40:40.082978   26368 status.go:255] checking status of ha-543552 ...
	I0416 16:40:40.083313   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.083370   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.099783   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33415
	I0416 16:40:40.100240   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.100972   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.100995   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.101498   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.101765   26368 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:40:40.103643   26368 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:40:40.103659   26368 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:40.103982   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.104029   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.119581   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0416 16:40:40.120063   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.120531   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.120558   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.120865   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.121039   26368 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:40:40.123627   26368 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:40.124047   26368 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:40.124102   26368 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:40.124138   26368 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:40:40.124530   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.124572   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.139189   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0416 16:40:40.139690   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.140178   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.140203   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.140476   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.140662   26368 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:40:40.140831   26368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:40.140866   26368 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:40:40.143451   26368 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:40.143792   26368 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:40:40.143826   26368 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:40:40.143933   26368 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:40:40.144096   26368 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:40:40.144278   26368 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:40:40.144416   26368 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:40:40.230017   26368 ssh_runner.go:195] Run: systemctl --version
	I0416 16:40:40.243691   26368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:40.264106   26368 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:40.264140   26368 api_server.go:166] Checking apiserver status ...
	I0416 16:40:40.264170   26368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:40.280918   26368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W0416 16:40:40.292911   26368 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:40.292970   26368 ssh_runner.go:195] Run: ls
	I0416 16:40:40.306764   26368 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:40.313926   26368 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:40.313955   26368 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:40:40.313965   26368 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:40.313982   26368 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:40:40.314378   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.314419   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.328875   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34327
	I0416 16:40:40.329354   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.329803   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.329823   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.330125   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.330324   26368 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:40:40.331938   26368 status.go:330] ha-543552-m02 host status = "Stopped" (err=<nil>)
	I0416 16:40:40.331951   26368 status.go:343] host is not running, skipping remaining checks
	I0416 16:40:40.331956   26368 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:40.331973   26368 status.go:255] checking status of ha-543552-m03 ...
	I0416 16:40:40.332349   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.332394   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.347017   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0416 16:40:40.347471   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.347888   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.347914   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.348219   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.348407   26368 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:40:40.350088   26368 status.go:330] ha-543552-m03 host status = "Running" (err=<nil>)
	I0416 16:40:40.350101   26368 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:40.350381   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.350412   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.365250   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0416 16:40:40.365632   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.366070   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.366096   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.366452   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.366643   26368 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:40:40.369479   26368 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:40.369866   26368 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:40.369895   26368 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:40.370017   26368 host.go:66] Checking if "ha-543552-m03" exists ...
	I0416 16:40:40.370369   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.370402   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.385108   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0416 16:40:40.385526   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.386033   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.386071   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.386377   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.386570   26368 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:40:40.386735   26368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:40.386757   26368 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:40:40.389561   26368 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:40.389975   26368 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:40.390000   26368 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:40.390151   26368 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:40:40.390329   26368 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:40:40.390502   26368 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:40:40.390645   26368 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:40:40.471404   26368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:40.488081   26368 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:40:40.488110   26368 api_server.go:166] Checking apiserver status ...
	I0416 16:40:40.488149   26368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:40:40.506287   26368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0416 16:40:40.517432   26368 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:40:40.517492   26368 ssh_runner.go:195] Run: ls
	I0416 16:40:40.523181   26368 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:40:40.527761   26368 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:40:40.527786   26368 status.go:422] ha-543552-m03 apiserver status = Running (err=<nil>)
	I0416 16:40:40.527794   26368 status.go:257] ha-543552-m03 status: &{Name:ha-543552-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:40:40.527809   26368 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:40:40.528175   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.528222   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.543503   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0416 16:40:40.543893   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.544439   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.544464   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.544760   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.544974   26368 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:40:40.546607   26368 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:40:40.546622   26368 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:40.546888   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.546926   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.562029   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38105
	I0416 16:40:40.562414   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.562887   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.562911   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.563244   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.563419   26368 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:40:40.566185   26368 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:40.566602   26368 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:40.566623   26368 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:40.566808   26368 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:40:40.567166   26368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:40.567208   26368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:40.583112   26368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36345
	I0416 16:40:40.583589   26368 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:40.584087   26368 main.go:141] libmachine: Using API Version  1
	I0416 16:40:40.584109   26368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:40.584426   26368 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:40.584600   26368 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:40:40.584752   26368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:40:40.584772   26368 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:40:40.587558   26368 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:40.587984   26368 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:40.588015   26368 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:40.588159   26368 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:40:40.588318   26368 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:40:40.588470   26368 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:40:40.588575   26368 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:40:40.673238   26368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:40:40.690396   26368 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr" : exit status 7
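For reference, the per-node records printed in the stderr blocks above (&{Name:... Host:... Kubelet:... APIServer:... Kubeconfig:... Worker:...}) boil down to a small status struct, and the non-zero exit occurs because ha-543552-m02 is still reported Stopped after the restart attempt. The sketch below is a simplified model of that: the struct mirrors the fields the logs print, while mapping a fully stopped node to exit code 7 matches the observed "exit status 7" but is an assumption for illustration, not minikube's exact exit-code logic.

	// status_sketch.go — hypothetical model of the node records seen in the logs above.
	package main

	import (
		"fmt"
		"os"
	)

	// NodeStatus mirrors the fields printed by status.go in the stderr blocks.
	type NodeStatus struct {
		Name       string
		Host       string // "Running" or "Stopped"
		Kubelet    string
		APIServer  string // "Irrelevant" for worker nodes such as ha-543552-m04
		Kubeconfig string
		Worker     bool
	}

	// exitCode returns 0 only if every host is running; returning 7 for a node whose
	// host, kubelet and apiserver are all down matches the observed exit status, but
	// this mapping is assumed here rather than taken from minikube's source.
	func exitCode(nodes []NodeStatus) int {
		code := 0
		for _, n := range nodes {
			if n.Host == "Stopped" && n.Kubelet == "Stopped" && n.APIServer == "Stopped" {
				code = 7
			}
		}
		return code
	}

	func main() {
		nodes := []NodeStatus{
			{Name: "ha-543552", Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
			{Name: "ha-543552-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
			{Name: "ha-543552-m04", Host: "Running", Kubelet: "Running", APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true},
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
		}
		os.Exit(exitCode(nodes)) // the test asserts exit 0, so any non-zero code is reported as a failure
	}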
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-543552 -n ha-543552
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-543552 logs -n 25: (1.673487971s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552:/home/docker/cp-test_ha-543552-m03_ha-543552.txt                       |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552 sudo cat                                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552.txt                                 |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m02:/home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m02 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04:/home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m04 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp testdata/cp-test.txt                                                | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552:/home/docker/cp-test_ha-543552-m04_ha-543552.txt                       |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552 sudo cat                                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552.txt                                 |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m02:/home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m02 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03:/home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m03 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-543552 node stop m02 -v=7                                                     | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-543552 node start m02 -v=7                                                    | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:32:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:32:57.811851   20924 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:32:57.811977   20924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:32:57.811990   20924 out.go:304] Setting ErrFile to fd 2...
	I0416 16:32:57.811996   20924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:32:57.812199   20924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:32:57.812765   20924 out.go:298] Setting JSON to false
	I0416 16:32:57.813653   20924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":930,"bootTime":1713284248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:32:57.813708   20924 start.go:139] virtualization: kvm guest
	I0416 16:32:57.815973   20924 out.go:177] * [ha-543552] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:32:57.817513   20924 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:32:57.818968   20924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:32:57.817534   20924 notify.go:220] Checking for updates...
	I0416 16:32:57.821609   20924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:32:57.823005   20924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:32:57.824387   20924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:32:57.825724   20924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:32:57.827100   20924 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:32:57.861189   20924 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 16:32:57.862626   20924 start.go:297] selected driver: kvm2
	I0416 16:32:57.862645   20924 start.go:901] validating driver "kvm2" against <nil>
	I0416 16:32:57.862665   20924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:32:57.863716   20924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:32:57.863810   20924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:32:57.878756   20924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:32:57.878800   20924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:32:57.878987   20924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:32:57.879047   20924 cni.go:84] Creating CNI manager for ""
	I0416 16:32:57.879060   20924 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0416 16:32:57.879064   20924 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 16:32:57.879111   20924 start.go:340] cluster config:
	{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:32:57.879198   20924 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:32:57.880865   20924 out.go:177] * Starting "ha-543552" primary control-plane node in "ha-543552" cluster
	I0416 16:32:57.881998   20924 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:32:57.882031   20924 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 16:32:57.882037   20924 cache.go:56] Caching tarball of preloaded images
	I0416 16:32:57.882096   20924 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:32:57.882107   20924 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:32:57.882400   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:32:57.882418   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json: {Name:mkf68664e68f97a8237c738cfc5938b681c72c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:32:57.882548   20924 start.go:360] acquireMachinesLock for ha-543552: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:32:57.882584   20924 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "ha-543552"
	I0416 16:32:57.882601   20924 start.go:93] Provisioning new machine with config: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:32:57.882670   20924 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 16:32:57.884395   20924 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:32:57.884520   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:32:57.884553   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:32:57.898753   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0416 16:32:57.899136   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:32:57.899675   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:32:57.899695   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:32:57.900042   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:32:57.900224   20924 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:32:57.900387   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:32:57.900547   20924 start.go:159] libmachine.API.Create for "ha-543552" (driver="kvm2")
	I0416 16:32:57.900575   20924 client.go:168] LocalClient.Create starting
	I0416 16:32:57.900607   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 16:32:57.900645   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:32:57.900659   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:32:57.900711   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 16:32:57.900729   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:32:57.900741   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:32:57.900754   20924 main.go:141] libmachine: Running pre-create checks...
	I0416 16:32:57.900771   20924 main.go:141] libmachine: (ha-543552) Calling .PreCreateCheck
	I0416 16:32:57.901115   20924 main.go:141] libmachine: (ha-543552) Calling .GetConfigRaw
	I0416 16:32:57.901499   20924 main.go:141] libmachine: Creating machine...
	I0416 16:32:57.901514   20924 main.go:141] libmachine: (ha-543552) Calling .Create
	I0416 16:32:57.901657   20924 main.go:141] libmachine: (ha-543552) Creating KVM machine...
	I0416 16:32:57.902958   20924 main.go:141] libmachine: (ha-543552) DBG | found existing default KVM network
	I0416 16:32:57.903590   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:57.903459   20947 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0416 16:32:57.903618   20924 main.go:141] libmachine: (ha-543552) DBG | created network xml: 
	I0416 16:32:57.903639   20924 main.go:141] libmachine: (ha-543552) DBG | <network>
	I0416 16:32:57.903667   20924 main.go:141] libmachine: (ha-543552) DBG |   <name>mk-ha-543552</name>
	I0416 16:32:57.903684   20924 main.go:141] libmachine: (ha-543552) DBG |   <dns enable='no'/>
	I0416 16:32:57.903694   20924 main.go:141] libmachine: (ha-543552) DBG |   
	I0416 16:32:57.903703   20924 main.go:141] libmachine: (ha-543552) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0416 16:32:57.903709   20924 main.go:141] libmachine: (ha-543552) DBG |     <dhcp>
	I0416 16:32:57.903718   20924 main.go:141] libmachine: (ha-543552) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0416 16:32:57.903756   20924 main.go:141] libmachine: (ha-543552) DBG |     </dhcp>
	I0416 16:32:57.903781   20924 main.go:141] libmachine: (ha-543552) DBG |   </ip>
	I0416 16:32:57.903802   20924 main.go:141] libmachine: (ha-543552) DBG |   
	I0416 16:32:57.903821   20924 main.go:141] libmachine: (ha-543552) DBG | </network>
	I0416 16:32:57.903840   20924 main.go:141] libmachine: (ha-543552) DBG | 
	I0416 16:32:57.908616   20924 main.go:141] libmachine: (ha-543552) DBG | trying to create private KVM network mk-ha-543552 192.168.39.0/24...
	I0416 16:32:57.972477   20924 main.go:141] libmachine: (ha-543552) DBG | private KVM network mk-ha-543552 192.168.39.0/24 created
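The private network just created matches the XML the driver printed above. For readers reproducing this step outside minikube, a minimal sketch with the libvirt Go bindings (assuming the libvirt.org/go/libvirt package; this is not minikube's own code) would look roughly like this:

    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // networkXML mirrors the mk-ha-543552 definition logged above.
    const networkXML = `<network>
      <name>mk-ha-543552</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent network, then bring it up.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()
        if err := net.Create(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("private network mk-ha-543552 created")
    }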
	I0416 16:32:57.972507   20924 main.go:141] libmachine: (ha-543552) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552 ...
	I0416 16:32:57.972520   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:57.972440   20947 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:32:57.972560   20924 main.go:141] libmachine: (ha-543552) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:32:57.972598   20924 main.go:141] libmachine: (ha-543552) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:32:58.192119   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:58.191972   20947 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa...
	I0416 16:32:58.434619   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:58.434483   20947 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/ha-543552.rawdisk...
	I0416 16:32:58.434649   20924 main.go:141] libmachine: (ha-543552) DBG | Writing magic tar header
	I0416 16:32:58.434658   20924 main.go:141] libmachine: (ha-543552) DBG | Writing SSH key tar header
	I0416 16:32:58.434666   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:58.434593   20947 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552 ...
	I0416 16:32:58.434679   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552
	I0416 16:32:58.434750   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552 (perms=drwx------)
	I0416 16:32:58.434773   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 16:32:58.434781   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:32:58.434788   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:32:58.434811   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 16:32:58.434824   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:32:58.434838   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:32:58.434847   20924 main.go:141] libmachine: (ha-543552) DBG | Checking permissions on dir: /home
	I0416 16:32:58.434853   20924 main.go:141] libmachine: (ha-543552) DBG | Skipping /home - not owner
	I0416 16:32:58.434864   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 16:32:58.434876   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 16:32:58.434884   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:32:58.434894   20924 main.go:141] libmachine: (ha-543552) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
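The id_rsa written a few lines above is an ordinary RSA SSH key pair. A hypothetical sketch of generating one in Go (assumes the imports crypto/rand, crypto/rsa, crypto/x509, encoding/pem and golang.org/x/crypto/ssh; minikube's own helper may differ):

    // Generate a 2048-bit RSA key pair and return the PEM-encoded private key
    // plus the authorized_keys line for the public half.
    func newSSHKeyPair() (privPEM, pubAuthorized []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        privPEM = pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return nil, nil, err
        }
        return privPEM, ssh.MarshalAuthorizedKey(pub), nil
    }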
	I0416 16:32:58.434906   20924 main.go:141] libmachine: (ha-543552) Creating domain...
	I0416 16:32:58.436047   20924 main.go:141] libmachine: (ha-543552) define libvirt domain using xml: 
	I0416 16:32:58.436060   20924 main.go:141] libmachine: (ha-543552) <domain type='kvm'>
	I0416 16:32:58.436066   20924 main.go:141] libmachine: (ha-543552)   <name>ha-543552</name>
	I0416 16:32:58.436071   20924 main.go:141] libmachine: (ha-543552)   <memory unit='MiB'>2200</memory>
	I0416 16:32:58.436076   20924 main.go:141] libmachine: (ha-543552)   <vcpu>2</vcpu>
	I0416 16:32:58.436091   20924 main.go:141] libmachine: (ha-543552)   <features>
	I0416 16:32:58.436097   20924 main.go:141] libmachine: (ha-543552)     <acpi/>
	I0416 16:32:58.436103   20924 main.go:141] libmachine: (ha-543552)     <apic/>
	I0416 16:32:58.436108   20924 main.go:141] libmachine: (ha-543552)     <pae/>
	I0416 16:32:58.436116   20924 main.go:141] libmachine: (ha-543552)     
	I0416 16:32:58.436121   20924 main.go:141] libmachine: (ha-543552)   </features>
	I0416 16:32:58.436128   20924 main.go:141] libmachine: (ha-543552)   <cpu mode='host-passthrough'>
	I0416 16:32:58.436133   20924 main.go:141] libmachine: (ha-543552)   
	I0416 16:32:58.436145   20924 main.go:141] libmachine: (ha-543552)   </cpu>
	I0416 16:32:58.436156   20924 main.go:141] libmachine: (ha-543552)   <os>
	I0416 16:32:58.436162   20924 main.go:141] libmachine: (ha-543552)     <type>hvm</type>
	I0416 16:32:58.436195   20924 main.go:141] libmachine: (ha-543552)     <boot dev='cdrom'/>
	I0416 16:32:58.436218   20924 main.go:141] libmachine: (ha-543552)     <boot dev='hd'/>
	I0416 16:32:58.436229   20924 main.go:141] libmachine: (ha-543552)     <bootmenu enable='no'/>
	I0416 16:32:58.436243   20924 main.go:141] libmachine: (ha-543552)   </os>
	I0416 16:32:58.436258   20924 main.go:141] libmachine: (ha-543552)   <devices>
	I0416 16:32:58.436272   20924 main.go:141] libmachine: (ha-543552)     <disk type='file' device='cdrom'>
	I0416 16:32:58.436285   20924 main.go:141] libmachine: (ha-543552)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/boot2docker.iso'/>
	I0416 16:32:58.436299   20924 main.go:141] libmachine: (ha-543552)       <target dev='hdc' bus='scsi'/>
	I0416 16:32:58.436314   20924 main.go:141] libmachine: (ha-543552)       <readonly/>
	I0416 16:32:58.436331   20924 main.go:141] libmachine: (ha-543552)     </disk>
	I0416 16:32:58.436346   20924 main.go:141] libmachine: (ha-543552)     <disk type='file' device='disk'>
	I0416 16:32:58.436360   20924 main.go:141] libmachine: (ha-543552)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:32:58.436378   20924 main.go:141] libmachine: (ha-543552)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/ha-543552.rawdisk'/>
	I0416 16:32:58.436390   20924 main.go:141] libmachine: (ha-543552)       <target dev='hda' bus='virtio'/>
	I0416 16:32:58.436401   20924 main.go:141] libmachine: (ha-543552)     </disk>
	I0416 16:32:58.436407   20924 main.go:141] libmachine: (ha-543552)     <interface type='network'>
	I0416 16:32:58.436420   20924 main.go:141] libmachine: (ha-543552)       <source network='mk-ha-543552'/>
	I0416 16:32:58.436436   20924 main.go:141] libmachine: (ha-543552)       <model type='virtio'/>
	I0416 16:32:58.436454   20924 main.go:141] libmachine: (ha-543552)     </interface>
	I0416 16:32:58.436469   20924 main.go:141] libmachine: (ha-543552)     <interface type='network'>
	I0416 16:32:58.436486   20924 main.go:141] libmachine: (ha-543552)       <source network='default'/>
	I0416 16:32:58.436499   20924 main.go:141] libmachine: (ha-543552)       <model type='virtio'/>
	I0416 16:32:58.436515   20924 main.go:141] libmachine: (ha-543552)     </interface>
	I0416 16:32:58.436530   20924 main.go:141] libmachine: (ha-543552)     <serial type='pty'>
	I0416 16:32:58.436542   20924 main.go:141] libmachine: (ha-543552)       <target port='0'/>
	I0416 16:32:58.436556   20924 main.go:141] libmachine: (ha-543552)     </serial>
	I0416 16:32:58.436573   20924 main.go:141] libmachine: (ha-543552)     <console type='pty'>
	I0416 16:32:58.436585   20924 main.go:141] libmachine: (ha-543552)       <target type='serial' port='0'/>
	I0416 16:32:58.436606   20924 main.go:141] libmachine: (ha-543552)     </console>
	I0416 16:32:58.436621   20924 main.go:141] libmachine: (ha-543552)     <rng model='virtio'>
	I0416 16:32:58.436635   20924 main.go:141] libmachine: (ha-543552)       <backend model='random'>/dev/random</backend>
	I0416 16:32:58.436648   20924 main.go:141] libmachine: (ha-543552)     </rng>
	I0416 16:32:58.436674   20924 main.go:141] libmachine: (ha-543552)     
	I0416 16:32:58.436693   20924 main.go:141] libmachine: (ha-543552)     
	I0416 16:32:58.436706   20924 main.go:141] libmachine: (ha-543552)   </devices>
	I0416 16:32:58.436719   20924 main.go:141] libmachine: (ha-543552) </domain>
	I0416 16:32:58.436731   20924 main.go:141] libmachine: (ha-543552) 
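With the network in place and the disk image written, the domain XML above is handed to libvirt in the same way. Continuing the hypothetical sketch (conn as in the network example above; domainXML holds the <domain type='kvm'> document just printed):

    // Define the persistent domain from the XML above, then boot it.
    dom, err := conn.DomainDefineXML(domainXML)
    if err != nil {
        log.Fatal(err)
    }
    defer dom.Free()
    if err := dom.Create(); err != nil { // corresponds to "Creating domain..."
        log.Fatal(err)
    }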
	I0416 16:32:58.441002   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:4a:90:dd in network default
	I0416 16:32:58.441610   20924 main.go:141] libmachine: (ha-543552) Ensuring networks are active...
	I0416 16:32:58.441639   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:32:58.442337   20924 main.go:141] libmachine: (ha-543552) Ensuring network default is active
	I0416 16:32:58.442644   20924 main.go:141] libmachine: (ha-543552) Ensuring network mk-ha-543552 is active
	I0416 16:32:58.443084   20924 main.go:141] libmachine: (ha-543552) Getting domain xml...
	I0416 16:32:58.443794   20924 main.go:141] libmachine: (ha-543552) Creating domain...
	I0416 16:32:59.616203   20924 main.go:141] libmachine: (ha-543552) Waiting to get IP...
	I0416 16:32:59.617108   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:32:59.617542   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:32:59.617579   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:59.617535   20947 retry.go:31] will retry after 203.520709ms: waiting for machine to come up
	I0416 16:32:59.822929   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:32:59.823289   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:32:59.823319   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:32:59.823277   20947 retry.go:31] will retry after 286.775995ms: waiting for machine to come up
	I0416 16:33:00.111725   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:00.112119   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:00.112144   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:00.112091   20947 retry.go:31] will retry after 373.736633ms: waiting for machine to come up
	I0416 16:33:00.487537   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:00.487898   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:00.487925   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:00.487849   20947 retry.go:31] will retry after 510.897921ms: waiting for machine to come up
	I0416 16:33:01.000715   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:01.001195   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:01.001219   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:01.001149   20947 retry.go:31] will retry after 676.370357ms: waiting for machine to come up
	I0416 16:33:01.679005   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:01.679416   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:01.679442   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:01.679364   20947 retry.go:31] will retry after 583.153779ms: waiting for machine to come up
	I0416 16:33:02.264118   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:02.264453   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:02.264491   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:02.264416   20947 retry.go:31] will retry after 784.977619ms: waiting for machine to come up
	I0416 16:33:03.051094   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:03.051492   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:03.051522   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:03.051431   20947 retry.go:31] will retry after 955.233152ms: waiting for machine to come up
	I0416 16:33:04.008677   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:04.009096   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:04.009124   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:04.009061   20947 retry.go:31] will retry after 1.709366699s: waiting for machine to come up
	I0416 16:33:05.720765   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:05.721119   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:05.721145   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:05.721084   20947 retry.go:31] will retry after 1.476164434s: waiting for machine to come up
	I0416 16:33:07.199821   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:07.200308   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:07.200331   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:07.200274   20947 retry.go:31] will retry after 2.756833s: waiting for machine to come up
	I0416 16:33:09.960071   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:09.960473   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:09.960502   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:09.960424   20947 retry.go:31] will retry after 2.969177743s: waiting for machine to come up
	I0416 16:33:12.931400   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:12.931807   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:12.931840   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:12.931755   20947 retry.go:31] will retry after 3.498551484s: waiting for machine to come up
	I0416 16:33:16.434396   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:16.434808   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find current IP address of domain ha-543552 in network mk-ha-543552
	I0416 16:33:16.434828   20924 main.go:141] libmachine: (ha-543552) DBG | I0416 16:33:16.434772   20947 retry.go:31] will retry after 4.44313934s: waiting for machine to come up
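Each "will retry after …" line comes from polling the network's DHCP leases for the VM's MAC address with a growing backoff. A rough hypothetical equivalent (timings illustrative; assumes the libvirt bindings used earlier plus fmt, strings and time):

    // Wait for a DHCP lease matching the domain's MAC on mk-ha-543552.
    func waitForIP(network *libvirt.Network, mac string) (string, error) {
        backoff := 200 * time.Millisecond
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            leases, err := network.GetDHCPLeases()
            if err != nil {
                return "", err
            }
            for _, l := range leases {
                if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
                    return l.IPaddr, nil
                }
            }
            time.Sleep(backoff)
            if backoff < 5*time.Second {
                backoff *= 2
            }
        }
        return "", fmt.Errorf("no DHCP lease for %s within the timeout", mac)
    }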
	I0416 16:33:20.881352   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:20.881820   20924 main.go:141] libmachine: (ha-543552) Found IP for machine: 192.168.39.97
	I0416 16:33:20.881865   20924 main.go:141] libmachine: (ha-543552) Reserving static IP address...
	I0416 16:33:20.881881   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has current primary IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:20.882159   20924 main.go:141] libmachine: (ha-543552) DBG | unable to find host DHCP lease matching {name: "ha-543552", mac: "52:54:00:3d:bc:28", ip: "192.168.39.97"} in network mk-ha-543552
	I0416 16:33:20.950850   20924 main.go:141] libmachine: (ha-543552) DBG | Getting to WaitForSSH function...
	I0416 16:33:20.950888   20924 main.go:141] libmachine: (ha-543552) Reserved static IP address: 192.168.39.97
	I0416 16:33:20.950923   20924 main.go:141] libmachine: (ha-543552) Waiting for SSH to be available...
	I0416 16:33:20.953231   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:20.953634   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:20.953659   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:20.953782   20924 main.go:141] libmachine: (ha-543552) DBG | Using SSH client type: external
	I0416 16:33:20.953799   20924 main.go:141] libmachine: (ha-543552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa (-rw-------)
	I0416 16:33:20.953834   20924 main.go:141] libmachine: (ha-543552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:33:20.953864   20924 main.go:141] libmachine: (ha-543552) DBG | About to run SSH command:
	I0416 16:33:20.953878   20924 main.go:141] libmachine: (ha-543552) DBG | exit 0
	I0416 16:33:21.081004   20924 main.go:141] libmachine: (ha-543552) DBG | SSH cmd err, output: <nil>: 
	I0416 16:33:21.081285   20924 main.go:141] libmachine: (ha-543552) KVM machine creation complete!
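The WaitForSSH step above simply runs `exit 0` on the guest until it succeeds. A minimal sketch of such a probe with golang.org/x/crypto/ssh (hypothetical helper, not minikube's provisioner; host-key checking is skipped just as in the logged ssh invocation):

    // Dial the VM (e.g. 192.168.39.97:22) with the generated id_rsa and run
    // `exit 0`; a nil error means SSH is up.
    func probeSSH(addr, user string, keyPEM []byte) error {
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }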
	I0416 16:33:21.081606   20924 main.go:141] libmachine: (ha-543552) Calling .GetConfigRaw
	I0416 16:33:21.082145   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:21.082313   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:21.082484   20924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:33:21.082496   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:33:21.083606   20924 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:33:21.083618   20924 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:33:21.083623   20924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:33:21.083628   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.085909   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.086318   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.086335   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.086464   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.086638   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.086822   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.087023   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.087190   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:21.087364   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:21.087375   20924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:33:21.196513   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:33:21.196540   20924 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:33:21.196550   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.199187   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.199528   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.199558   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.199696   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.199893   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.200061   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.200149   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.200319   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:21.200485   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:21.200495   20924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:33:21.310711   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:33:21.310773   20924 main.go:141] libmachine: found compatible host: buildroot
	I0416 16:33:21.310787   20924 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:33:21.310800   20924 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:33:21.311070   20924 buildroot.go:166] provisioning hostname "ha-543552"
	I0416 16:33:21.311094   20924 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:33:21.311296   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.313651   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.313957   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.313985   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.314090   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.314269   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.314450   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.314590   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.314734   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:21.314924   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:21.314938   20924 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-543552 && echo "ha-543552" | sudo tee /etc/hostname
	I0416 16:33:21.436909   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552
	
	I0416 16:33:21.436936   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.439460   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.439772   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.439802   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.439937   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.440119   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.440378   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.440540   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.440727   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:21.440925   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:21.440942   20924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-543552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-543552/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-543552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:33:21.559273   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:33:21.559299   20924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:33:21.559315   20924 buildroot.go:174] setting up certificates
	I0416 16:33:21.559338   20924 provision.go:84] configureAuth start
	I0416 16:33:21.559346   20924 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:33:21.559637   20924 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:33:21.562099   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.562405   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.562437   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.562585   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.564678   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.564968   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.564993   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.565087   20924 provision.go:143] copyHostCerts
	I0416 16:33:21.565110   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:33:21.565149   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 16:33:21.565165   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:33:21.565231   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:33:21.565315   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:33:21.565332   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 16:33:21.565339   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:33:21.565361   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:33:21.565412   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:33:21.565434   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 16:33:21.565441   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:33:21.565461   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:33:21.565517   20924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.ha-543552 san=[127.0.0.1 192.168.39.97 ha-543552 localhost minikube]
	I0416 16:33:21.857459   20924 provision.go:177] copyRemoteCerts
	I0416 16:33:21.857512   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:33:21.857531   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:21.860096   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.860371   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:21.860401   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:21.860552   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:21.860729   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:21.860922   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:21.861051   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:21.944615   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 16:33:21.944674   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:33:21.971869   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 16:33:21.971929   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:33:21.997689   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 16:33:21.997758   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 16:33:22.022986   20924 provision.go:87] duration metric: took 463.635224ms to configureAuth
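copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest over the same SSH connection. One way to write a root-owned remote file from Go is to stream it through sudo tee; a hypothetical sketch (client is an *ssh.Client as in the probe above; assumes bytes, fmt and path/filepath):

    // Stream data to a root-owned path on the guest via `sudo tee`.
    func writeRemoteFile(client *ssh.Client, path string, data []byte, mode string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        cmd := fmt.Sprintf("sudo mkdir -p %s && sudo tee %s >/dev/null && sudo chmod %s %s",
            filepath.Dir(path), path, mode, path)
        return sess.Run(cmd)
    }

A call such as writeRemoteFile(client, "/etc/docker/server.pem", serverPEM, "0640") would mirror the scp lines above; the exact file modes minikube sets are not shown in this log.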
	I0416 16:33:22.023016   20924 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:33:22.023191   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:33:22.023303   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.025890   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.026338   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.026365   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.026539   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.026727   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.026880   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.027026   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.027234   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:22.027382   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:22.027397   20924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:33:22.303097   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 16:33:22.303126   20924 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:33:22.303135   20924 main.go:141] libmachine: (ha-543552) Calling .GetURL
	I0416 16:33:22.304367   20924 main.go:141] libmachine: (ha-543552) DBG | Using libvirt version 6000000
	I0416 16:33:22.307123   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.307554   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.307591   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.307768   20924 main.go:141] libmachine: Docker is up and running!
	I0416 16:33:22.307780   20924 main.go:141] libmachine: Reticulating splines...
	I0416 16:33:22.307786   20924 client.go:171] duration metric: took 24.407201533s to LocalClient.Create
	I0416 16:33:22.307808   20924 start.go:167] duration metric: took 24.407260974s to libmachine.API.Create "ha-543552"
	I0416 16:33:22.307821   20924 start.go:293] postStartSetup for "ha-543552" (driver="kvm2")
	I0416 16:33:22.307836   20924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:33:22.307853   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.308090   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:33:22.308113   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.310239   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.310570   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.310618   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.310700   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.310915   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.311071   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.311234   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:22.396940   20924 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:33:22.401934   20924 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:33:22.401955   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:33:22.402019   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:33:22.402135   20924 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 16:33:22.402147   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 16:33:22.402252   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:33:22.413322   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:33:22.440572   20924 start.go:296] duration metric: took 132.736085ms for postStartSetup
	I0416 16:33:22.440628   20924 main.go:141] libmachine: (ha-543552) Calling .GetConfigRaw
	I0416 16:33:22.441238   20924 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:33:22.443669   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.443957   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.443987   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.444201   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:33:22.444404   20924 start.go:128] duration metric: took 24.561721857s to createHost
	I0416 16:33:22.444431   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.446660   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.447027   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.447055   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.447184   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.447370   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.447525   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.447667   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.447819   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:33:22.447971   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:33:22.447985   20924 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:33:22.554052   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285202.524318982
	
	I0416 16:33:22.554106   20924 fix.go:216] guest clock: 1713285202.524318982
	I0416 16:33:22.554118   20924 fix.go:229] Guest: 2024-04-16 16:33:22.524318982 +0000 UTC Remote: 2024-04-16 16:33:22.444419438 +0000 UTC m=+24.679599031 (delta=79.899544ms)
	I0416 16:33:22.554170   20924 fix.go:200] guest clock delta is within tolerance: 79.899544ms
	I0416 16:33:22.554179   20924 start.go:83] releasing machines lock for "ha-543552", held for 24.671583823s
	I0416 16:33:22.554209   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.554476   20924 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:33:22.557142   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.557527   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.557549   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.557678   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.558116   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.558288   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:22.558374   20924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:33:22.558415   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.558473   20924 ssh_runner.go:195] Run: cat /version.json
	I0416 16:33:22.558492   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:22.561057   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.561248   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.561388   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.561415   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.561566   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.561578   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:22.561615   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:22.561735   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.561812   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:22.561886   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.561983   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:22.562048   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:22.562378   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:22.562541   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:22.661992   20924 ssh_runner.go:195] Run: systemctl --version
	I0416 16:33:22.668411   20924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:33:22.850856   20924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:33:22.857543   20924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:33:22.857605   20924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:33:22.876670   20924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:33:22.876692   20924 start.go:494] detecting cgroup driver to use...
	I0416 16:33:22.876750   20924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:33:22.894470   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:33:22.909759   20924 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:33:22.909800   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:33:22.925012   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:33:22.940185   20924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:33:23.070168   20924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:33:23.226306   20924 docker.go:233] disabling docker service ...
	I0416 16:33:23.226362   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:33:23.242582   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:33:23.257400   20924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:33:23.415840   20924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:33:23.550004   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:33:23.565816   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:33:23.586337   20924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:33:23.586393   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.598380   20924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:33:23.598438   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.610706   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.623468   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.636111   20924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:33:23.648735   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.661408   20924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.680156   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:33:23.692325   20924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:33:23.703218   20924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:33:23.703260   20924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:33:23.717544   20924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:33:23.728628   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:33:23.861080   20924 ssh_runner.go:195] Run: sudo systemctl restart crio
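The sed invocations above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl that the rest of the run relies on. Inferred from those commands (not a verbatim dump of the file on the guest), the relevant keys end up as:

    // Expected end-state of the CRI-O drop-in after the edits above; inferred
    // from the sed commands in this log, not copied from the guest.
    const expectedCrioDropIn = `
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `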
	I0416 16:33:24.009175   20924 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:33:24.009237   20924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:33:24.014530   20924 start.go:562] Will wait 60s for crictl version
	I0416 16:33:24.014581   20924 ssh_runner.go:195] Run: which crictl
	I0416 16:33:24.018826   20924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:33:24.060662   20924 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:33:24.060753   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:33:24.092035   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:33:24.124827   20924 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:33:24.126217   20924 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:33:24.128565   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:24.128929   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:24.128964   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:24.129143   20924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:33:24.133807   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
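Both /etc/hosts updates in this run (host.minikube.internal here, control-plane.minikube.internal later at 16:33:28) use the same idempotent pattern: grep away any stale line for the name, append the fresh mapping, then copy the result back into place. As a standalone sketch of that pattern, with the hostname and IP taken from this log:

    HOST=host.minikube.internal
    IP=192.168.39.1
    # Drop any existing entry for $HOST, append the current mapping, install atomically.
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$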
	I0416 16:33:24.148663   20924 kubeadm.go:877] updating cluster {Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:33:24.148750   20924 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:33:24.148787   20924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:33:24.186056   20924 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 16:33:24.186112   20924 ssh_runner.go:195] Run: which lz4
	I0416 16:33:24.190645   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0416 16:33:24.190725   20924 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 16:33:24.195390   20924 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 16:33:24.195421   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 16:33:25.821813   20924 crio.go:462] duration metric: took 1.631118235s to copy over tarball
	I0416 16:33:25.821869   20924 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 16:33:28.267640   20924 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.445730533s)
	I0416 16:33:28.267671   20924 crio.go:469] duration metric: took 2.445835938s to extract the tarball
	I0416 16:33:28.267680   20924 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 16:33:28.307685   20924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:33:28.358068   20924 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 16:33:28.358087   20924 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:33:28.358096   20924 kubeadm.go:928] updating node { 192.168.39.97 8443 v1.29.3 crio true true} ...
	I0416 16:33:28.358205   20924 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-543552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:33:28.358291   20924 ssh_runner.go:195] Run: crio config
	I0416 16:33:28.408507   20924 cni.go:84] Creating CNI manager for ""
	I0416 16:33:28.408525   20924 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:33:28.408535   20924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:33:28.408560   20924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-543552 NodeName:ha-543552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:33:28.408717   20924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-543552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
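	The generated kubeadm.yaml above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document and is copied to /var/tmp/minikube/kubeadm.yaml a few lines later. If a config like this needed a sanity check outside of minikube, kubeadm itself can exercise it without modifying the node; a sketch, assuming kubeadm v1.29 is on the PATH:

	    # Render everything kubeadm would do with this config, without applying it.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	    # Print the fully-defaulted InitConfiguration/ClusterConfiguration for comparison.
	    kubeadm config print init-defaults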
	
	I0416 16:33:28.408782   20924 kube-vip.go:111] generating kube-vip config ...
	I0416 16:33:28.408833   20924 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:33:28.429384   20924 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:33:28.429473   20924 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
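The kube-vip manifest above runs as a static pod on the host network and announces the HA virtual IP 192.168.39.254 on eth0 via ARP, with leader election through the plndr-cp-lock lease; lb_enable additionally load-balances port 8443 across control-plane members. Once the node is up, a hedged way to confirm it took effect from inside the guest (the manifest path matches the staticPodPath used elsewhere in this log):

    # The rendered static pod manifest on the control-plane node.
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
    # The elected leader should hold the VIP as an additional address on eth0.
    ip addr show dev eth0 | grep 192.168.39.254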
	I0416 16:33:28.429518   20924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:33:28.440588   20924 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:33:28.440647   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:33:28.451318   20924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0416 16:33:28.469233   20924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:33:28.486759   20924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0416 16:33:28.504631   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0416 16:33:28.522061   20924 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:33:28.526296   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:33:28.539836   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:33:28.673751   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
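The scp calls above install the kubelet drop-in (10-kubeadm.conf, which carries the ExecStart line shown earlier with --node-ip and --hostname-override), the kubelet.service unit itself, the kubeadm.yaml staging copy and the kube-vip static pod manifest, after which kubelet is started ahead of kubeadm init. To see how the unit and its drop-in compose on the node, systemd can print the merged definition; a small sketch:

    # Shows kubelet.service followed by the 10-kubeadm.conf drop-in that overrides ExecStart.
    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager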
	I0416 16:33:28.694400   20924 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552 for IP: 192.168.39.97
	I0416 16:33:28.694425   20924 certs.go:194] generating shared ca certs ...
	I0416 16:33:28.694444   20924 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:28.694591   20924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:33:28.694764   20924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:33:28.694790   20924 certs.go:256] generating profile certs ...
	I0416 16:33:28.694945   20924 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key
	I0416 16:33:28.694970   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt with IP's: []
	I0416 16:33:28.900640   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt ...
	I0416 16:33:28.900667   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt: {Name:mkeddd79b0699f023de470f3c894250355f52b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:28.900825   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key ...
	I0416 16:33:28.900845   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key: {Name:mk778c520f35b379c5cb8ee5fa6157173989ee30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:28.900917   20924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.ee9cf71c
	I0416 16:33:28.900932   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.ee9cf71c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.254]
	I0416 16:33:29.076089   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.ee9cf71c ...
	I0416 16:33:29.076118   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.ee9cf71c: {Name:mk77f2b79f2ee01a60e1efd721f633a59434e4fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:29.076254   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.ee9cf71c ...
	I0416 16:33:29.076266   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.ee9cf71c: {Name:mk218623fb54360b6300d702d2b43eaa73a10572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:29.076336   20924 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.ee9cf71c -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt
	I0416 16:33:29.076401   20924 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.ee9cf71c -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key
	I0416 16:33:29.076451   20924 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key
	I0416 16:33:29.076466   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt with IP's: []
	I0416 16:33:29.321438   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt ...
	I0416 16:33:29.321500   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt: {Name:mk72aa09e0d8e03c926655a8adab62b8941eb403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:29.321640   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key ...
	I0416 16:33:29.321651   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key: {Name:mk01d0762bf550e927b05c2d906ac33d7efe3fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:29.321711   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:33:29.321727   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:33:29.321737   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:33:29.321750   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:33:29.321759   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:33:29.321769   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:33:29.321782   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:33:29.321791   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:33:29.321845   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 16:33:29.321880   20924 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 16:33:29.321890   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:33:29.321909   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:33:29.321934   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:33:29.321955   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:33:29.321995   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:33:29.322023   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 16:33:29.322036   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:33:29.322054   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 16:33:29.322566   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:33:29.358552   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:33:29.385459   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:33:29.412086   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:33:29.438700   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 16:33:29.467983   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:33:29.523599   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:33:29.550279   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:33:29.578043   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 16:33:29.605815   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:33:29.632735   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 16:33:29.658854   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
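The profile certificates generated above include an apiserver certificate whose SANs cover the service IP 10.96.0.1, localhost, the node IP 192.168.39.97 and the HA VIP 192.168.39.254. After the copies land in /var/lib/minikube/certs, the SAN list can be read back with openssl (a sketch; run inside the guest):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'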
	I0416 16:33:29.679406   20924 ssh_runner.go:195] Run: openssl version
	I0416 16:33:29.686044   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 16:33:29.699832   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 16:33:29.705110   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 16:33:29.705174   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 16:33:29.711814   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:33:29.726799   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:33:29.740775   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:33:29.746029   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:33:29.746080   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:33:29.752395   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:33:29.765346   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 16:33:29.778120   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 16:33:29.783119   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 16:33:29.783176   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 16:33:29.789206   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
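Each CA copied to /usr/share/ca-certificates also gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash, which is exactly what the `openssl x509 -hash -noout` calls above compute (b5213941 for minikubeCA). A short worked example of that mechanism, reusing the minikubeCA path from the log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # the .0 suffix disambiguates hash collisions
    openssl verify -CApath /etc/ssl/certs "$CERT"     # lookup now resolves via the hash link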
	I0416 16:33:29.802387   20924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:33:29.807192   20924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:33:29.807249   20924 kubeadm.go:391] StartCluster: {Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:33:29.807338   20924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 16:33:29.807409   20924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:33:29.851735   20924 cri.go:89] found id: ""
	I0416 16:33:29.851797   20924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 16:33:29.863644   20924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 16:33:29.874774   20924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 16:33:29.886033   20924 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 16:33:29.886053   20924 kubeadm.go:156] found existing configuration files:
	
	I0416 16:33:29.886092   20924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 16:33:29.897071   20924 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 16:33:29.897122   20924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 16:33:29.908766   20924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 16:33:29.921517   20924 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 16:33:29.921572   20924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 16:33:29.932682   20924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 16:33:29.943247   20924 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 16:33:29.943291   20924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 16:33:29.954112   20924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 16:33:29.964622   20924 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 16:33:29.964678   20924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 16:33:29.975428   20924 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 16:33:30.235876   20924 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 16:33:41.323894   20924 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 16:33:41.323967   20924 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 16:33:41.324068   20924 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 16:33:41.324233   20924 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 16:33:41.324364   20924 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 16:33:41.324450   20924 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 16:33:41.326013   20924 out.go:204]   - Generating certificates and keys ...
	I0416 16:33:41.326107   20924 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 16:33:41.326199   20924 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 16:33:41.326286   20924 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 16:33:41.326358   20924 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 16:33:41.326438   20924 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 16:33:41.326510   20924 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 16:33:41.326580   20924 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 16:33:41.326732   20924 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-543552 localhost] and IPs [192.168.39.97 127.0.0.1 ::1]
	I0416 16:33:41.326804   20924 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 16:33:41.326972   20924 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-543552 localhost] and IPs [192.168.39.97 127.0.0.1 ::1]
	I0416 16:33:41.327069   20924 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 16:33:41.327163   20924 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 16:33:41.327220   20924 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 16:33:41.327302   20924 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 16:33:41.327388   20924 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 16:33:41.327483   20924 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 16:33:41.327555   20924 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 16:33:41.327638   20924 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 16:33:41.327716   20924 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 16:33:41.327824   20924 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 16:33:41.327929   20924 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 16:33:41.329644   20924 out.go:204]   - Booting up control plane ...
	I0416 16:33:41.329762   20924 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 16:33:41.329862   20924 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 16:33:41.329946   20924 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 16:33:41.330098   20924 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 16:33:41.330213   20924 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 16:33:41.330263   20924 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 16:33:41.330454   20924 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 16:33:41.330551   20924 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.572752 seconds
	I0416 16:33:41.330688   20924 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 16:33:41.330833   20924 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 16:33:41.330921   20924 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 16:33:41.331143   20924 kubeadm.go:309] [mark-control-plane] Marking the node ha-543552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 16:33:41.331217   20924 kubeadm.go:309] [bootstrap-token] Using token: wi0m3o.dddy96d54tiolpuf
	I0416 16:33:41.332767   20924 out.go:204]   - Configuring RBAC rules ...
	I0416 16:33:41.332879   20924 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 16:33:41.332976   20924 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 16:33:41.333100   20924 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 16:33:41.333211   20924 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 16:33:41.333390   20924 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 16:33:41.333502   20924 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 16:33:41.333666   20924 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 16:33:41.333723   20924 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 16:33:41.333791   20924 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 16:33:41.333803   20924 kubeadm.go:309] 
	I0416 16:33:41.333871   20924 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 16:33:41.333890   20924 kubeadm.go:309] 
	I0416 16:33:41.333997   20924 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 16:33:41.334009   20924 kubeadm.go:309] 
	I0416 16:33:41.334033   20924 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 16:33:41.334083   20924 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 16:33:41.334132   20924 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 16:33:41.334138   20924 kubeadm.go:309] 
	I0416 16:33:41.334200   20924 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 16:33:41.334207   20924 kubeadm.go:309] 
	I0416 16:33:41.334249   20924 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 16:33:41.334255   20924 kubeadm.go:309] 
	I0416 16:33:41.334296   20924 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 16:33:41.334394   20924 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 16:33:41.334489   20924 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 16:33:41.334504   20924 kubeadm.go:309] 
	I0416 16:33:41.334623   20924 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 16:33:41.334725   20924 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 16:33:41.334735   20924 kubeadm.go:309] 
	I0416 16:33:41.334856   20924 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wi0m3o.dddy96d54tiolpuf \
	I0416 16:33:41.335001   20924 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 \
	I0416 16:33:41.335040   20924 kubeadm.go:309] 	--control-plane 
	I0416 16:33:41.335049   20924 kubeadm.go:309] 
	I0416 16:33:41.335128   20924 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 16:33:41.335135   20924 kubeadm.go:309] 
	I0416 16:33:41.335200   20924 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wi0m3o.dddy96d54tiolpuf \
	I0416 16:33:41.335305   20924 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 
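The join commands printed above pin the cluster CA with a sha256 of its public key. If that hash ever needed to be recomputed on this host, the standard openssl pipeline from the kubeadm documentation applies, with the CA path adjusted to the certificatesDir used in this config (/var/lib/minikube/certs); a sketch:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'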
	I0416 16:33:41.335317   20924 cni.go:84] Creating CNI manager for ""
	I0416 16:33:41.335323   20924 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0416 16:33:41.337825   20924 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0416 16:33:41.339512   20924 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 16:33:41.369677   20924 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 16:33:41.369700   20924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0416 16:33:41.435637   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
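With a single node detected, minikube recommends kindnet and applies its manifest (written to /var/tmp/minikube/cni.yaml) through the bundled kubectl. A hedged follow-up check, assuming kindnet's usual layout as a DaemonSet in kube-system:

    sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonsets,pods -o wide | grep -i kindnet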
	I0416 16:33:41.897389   20924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 16:33:41.897457   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:41.897501   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-543552 minikube.k8s.io/updated_at=2024_04_16T16_33_41_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-543552 minikube.k8s.io/primary=true
	I0416 16:33:42.040936   20924 ops.go:34] apiserver oom_adj: -16
	I0416 16:33:42.041306   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:42.541747   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:43.042279   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:43.542385   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:44.041711   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:44.542185   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:45.041624   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:45.541699   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:46.041747   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:46.542056   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:47.041583   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:47.541718   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:48.041939   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:48.541494   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:49.041708   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:49.541982   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:50.042320   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:50.541440   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:51.041601   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:51.541493   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:52.041436   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:52.542239   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:53.041938   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:53.541442   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:54.041409   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 16:33:54.240330   20924 kubeadm.go:1107] duration metric: took 12.342931074s to wait for elevateKubeSystemPrivileges
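The repeated `kubectl get sa default` calls above are a poll: the RBAC and labeling steps only proceed once kube-controller-manager has created the default ServiceAccount, which took roughly 12s here. The same wait, written as a plain loop (a sketch, not minikube's implementation):

    # Poll every 500ms until the default ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done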
	W0416 16:33:54.240375   20924 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 16:33:54.240385   20924 kubeadm.go:393] duration metric: took 24.433140902s to StartCluster
	I0416 16:33:54.240406   20924 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:54.240495   20924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:33:54.241518   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:33:54.241791   20924 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:33:54.241827   20924 start.go:240] waiting for startup goroutines ...
	I0416 16:33:54.241812   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 16:33:54.241843   20924 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 16:33:54.241939   20924 addons.go:69] Setting storage-provisioner=true in profile "ha-543552"
	I0416 16:33:54.241976   20924 addons.go:234] Setting addon storage-provisioner=true in "ha-543552"
	I0416 16:33:54.242014   20924 addons.go:69] Setting default-storageclass=true in profile "ha-543552"
	I0416 16:33:54.242026   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:33:54.242060   20924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-543552"
	I0416 16:33:54.242022   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:33:54.242450   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.242484   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.242511   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.242541   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.257510   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0416 16:33:54.257532   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I0416 16:33:54.258023   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.258105   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.258562   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.258584   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.258788   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.258812   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.258949   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.259144   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.259315   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:33:54.259554   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.259606   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.261751   20924 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:33:54.262097   20924 kapi.go:59] client config for ha-543552: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt", KeyFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key", CAFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 16:33:54.262645   20924 cert_rotation.go:137] Starting client certificate rotation controller
	I0416 16:33:54.262794   20924 addons.go:234] Setting addon default-storageclass=true in "ha-543552"
	I0416 16:33:54.262836   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:33:54.263200   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.263239   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.274303   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0416 16:33:54.274880   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.275393   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.275420   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.275786   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.275982   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:33:54.277617   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:54.279608   20924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 16:33:54.278047   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0416 16:33:54.280037   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.281169   20924 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 16:33:54.281182   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 16:33:54.281194   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:54.281646   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.281672   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.282001   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.282544   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:54.282572   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:54.284227   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:54.284666   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:54.284688   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:54.284720   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:54.284885   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:54.285087   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:54.285221   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:54.298327   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0416 16:33:54.298741   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:54.299215   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:54.299239   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:54.299563   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:54.299759   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:33:54.301278   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:33:54.301546   20924 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 16:33:54.301562   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 16:33:54.301580   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:33:54.304804   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:54.305209   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:33:54.305235   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:33:54.305413   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:33:54.305611   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:33:54.305768   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:33:54.305927   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:33:54.431124   20924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 16:33:54.491826   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 16:33:54.500737   20924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
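	Both addons above follow the same pattern: the manifest bytes are copied to /etc/kubernetes/addons over SSH (the "scp memory -->" lines), then applied with the kubectl binary bundled on the node. A minimal sketch of that remote apply step, shelling out to ssh the way the external client in this log does; the helper name and exact flags are illustrative, not minikube's ssh_runner:

	    package sketch

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // applyAddon runs the node's bundled kubectl against a manifest that has
	    // already been copied to /etc/kubernetes/addons, mirroring the log above.
	    func applyAddon(ip, keyPath, manifest string) error {
	        remote := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
	            "/var/lib/minikube/binaries/v1.29.3/kubectl apply -f " + manifest
	        cmd := exec.Command("ssh",
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-i", keyPath,
	            "docker@"+ip, remote)
	        if out, err := cmd.CombinedOutput(); err != nil {
	            return fmt.Errorf("kubectl apply %s: %v: %s", manifest, err, out)
	        }
	        return nil
	    }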
	I0416 16:33:54.785677   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:54.785705   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:54.785989   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:54.786000   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:54.786016   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:54.786032   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:54.786040   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:54.786277   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:54.786295   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:54.786330   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:54.786402   20924 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0416 16:33:54.786414   20924 round_trippers.go:469] Request Headers:
	I0416 16:33:54.786422   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:33:54.786426   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:33:54.794932   20924 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 16:33:54.795695   20924 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0416 16:33:54.795712   20924 round_trippers.go:469] Request Headers:
	I0416 16:33:54.795723   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:33:54.795728   20924 round_trippers.go:473]     Content-Type: application/json
	I0416 16:33:54.795732   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:33:54.798453   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:33:54.798579   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:54.798592   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:54.798846   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:54.798879   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:54.798893   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:54.909429   20924 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
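	The sed pipeline at 16:33:54.491826 rewrites the coredns ConfigMap in place; judging from that expression, the block it inserts just above the forward directive looks roughly like the following (it also adds a log directive above errors), which is what the "host record injected" message above confirms:

	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }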
	I0416 16:33:55.124934   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:55.124958   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:55.125260   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:55.125279   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:55.125287   20924 main.go:141] libmachine: Making call to close driver server
	I0416 16:33:55.125285   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:55.125296   20924 main.go:141] libmachine: (ha-543552) Calling .Close
	I0416 16:33:55.125504   20924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 16:33:55.125518   20924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 16:33:55.125531   20924 main.go:141] libmachine: (ha-543552) DBG | Closing plugin on server side
	I0416 16:33:55.127553   20924 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0416 16:33:55.128988   20924 addons.go:505] duration metric: took 887.155371ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0416 16:33:55.129028   20924 start.go:245] waiting for cluster config update ...
	I0416 16:33:55.129045   20924 start.go:254] writing updated cluster config ...
	I0416 16:33:55.130989   20924 out.go:177] 
	I0416 16:33:55.132535   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:33:55.132650   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:33:55.134505   20924 out.go:177] * Starting "ha-543552-m02" control-plane node in "ha-543552" cluster
	I0416 16:33:55.135806   20924 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:33:55.135835   20924 cache.go:56] Caching tarball of preloaded images
	I0416 16:33:55.135935   20924 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:33:55.135956   20924 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:33:55.136048   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:33:55.136226   20924 start.go:360] acquireMachinesLock for ha-543552-m02: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:33:55.136269   20924 start.go:364] duration metric: took 24.383µs to acquireMachinesLock for "ha-543552-m02"
	I0416 16:33:55.136288   20924 start.go:93] Provisioning new machine with config: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
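	The &{...} dump above is the full cluster config handed to the provisioner, and the trailing &{Name:m02 ...} is the node about to be created. Rendered as a Go struct literal for readability (field names are taken verbatim from the dump; the type definition here is a stand-in for minikube's own config type):

	    // Stand-in for the node entry printed above; IP is still empty and is
	    // only filled in once the VM gets a DHCP lease further down in the log.
	    type Node struct {
	        Name              string
	        IP                string
	        Port              int
	        KubernetesVersion string
	        ContainerRuntime  string
	        ControlPlane      bool
	        Worker            bool
	    }

	    var m02 = Node{
	        Name:              "m02",
	        Port:              8443,
	        KubernetesVersion: "v1.29.3",
	        ContainerRuntime:  "crio",
	        ControlPlane:      true,
	        Worker:            true,
	    }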
	I0416 16:33:55.136358   20924 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0416 16:33:55.137854   20924 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:33:55.137934   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:33:55.137960   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:33:55.151981   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0416 16:33:55.152324   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:33:55.152809   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:33:55.152845   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:33:55.153127   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:33:55.153373   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetMachineName
	I0416 16:33:55.153509   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:33:55.153690   20924 start.go:159] libmachine.API.Create for "ha-543552" (driver="kvm2")
	I0416 16:33:55.153718   20924 client.go:168] LocalClient.Create starting
	I0416 16:33:55.153752   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 16:33:55.153789   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:33:55.153802   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:33:55.153850   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 16:33:55.153877   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:33:55.153888   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:33:55.153904   20924 main.go:141] libmachine: Running pre-create checks...
	I0416 16:33:55.153912   20924 main.go:141] libmachine: (ha-543552-m02) Calling .PreCreateCheck
	I0416 16:33:55.154090   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetConfigRaw
	I0416 16:33:55.154433   20924 main.go:141] libmachine: Creating machine...
	I0416 16:33:55.154448   20924 main.go:141] libmachine: (ha-543552-m02) Calling .Create
	I0416 16:33:55.154580   20924 main.go:141] libmachine: (ha-543552-m02) Creating KVM machine...
	I0416 16:33:55.155669   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found existing default KVM network
	I0416 16:33:55.155761   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found existing private KVM network mk-ha-543552
	I0416 16:33:55.155860   20924 main.go:141] libmachine: (ha-543552-m02) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02 ...
	I0416 16:33:55.155914   20924 main.go:141] libmachine: (ha-543552-m02) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:33:55.155935   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:55.155835   21317 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:33:55.156018   20924 main.go:141] libmachine: (ha-543552-m02) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:33:55.391752   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:55.391649   21317 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa...
	I0416 16:33:55.544429   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:55.544327   21317 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/ha-543552-m02.rawdisk...
	I0416 16:33:55.544456   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Writing magic tar header
	I0416 16:33:55.544466   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Writing SSH key tar header
	I0416 16:33:55.544474   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:55.544430   21317 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02 ...
	I0416 16:33:55.544557   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02
	I0416 16:33:55.544596   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02 (perms=drwx------)
	I0416 16:33:55.544621   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:33:55.544637   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 16:33:55.544655   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:33:55.544671   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 16:33:55.544682   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 16:33:55.544696   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 16:33:55.544705   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:33:55.544722   20924 main.go:141] libmachine: (ha-543552-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 16:33:55.544730   20924 main.go:141] libmachine: (ha-543552-m02) Creating domain...
	I0416 16:33:55.544751   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:33:55.544767   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:33:55.544779   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Checking permissions on dir: /home
	I0416 16:33:55.544788   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Skipping /home - not owner
	I0416 16:33:55.545596   20924 main.go:141] libmachine: (ha-543552-m02) define libvirt domain using xml: 
	I0416 16:33:55.545616   20924 main.go:141] libmachine: (ha-543552-m02) <domain type='kvm'>
	I0416 16:33:55.545623   20924 main.go:141] libmachine: (ha-543552-m02)   <name>ha-543552-m02</name>
	I0416 16:33:55.545629   20924 main.go:141] libmachine: (ha-543552-m02)   <memory unit='MiB'>2200</memory>
	I0416 16:33:55.545634   20924 main.go:141] libmachine: (ha-543552-m02)   <vcpu>2</vcpu>
	I0416 16:33:55.545642   20924 main.go:141] libmachine: (ha-543552-m02)   <features>
	I0416 16:33:55.545677   20924 main.go:141] libmachine: (ha-543552-m02)     <acpi/>
	I0416 16:33:55.545705   20924 main.go:141] libmachine: (ha-543552-m02)     <apic/>
	I0416 16:33:55.545716   20924 main.go:141] libmachine: (ha-543552-m02)     <pae/>
	I0416 16:33:55.545728   20924 main.go:141] libmachine: (ha-543552-m02)     
	I0416 16:33:55.545738   20924 main.go:141] libmachine: (ha-543552-m02)   </features>
	I0416 16:33:55.545770   20924 main.go:141] libmachine: (ha-543552-m02)   <cpu mode='host-passthrough'>
	I0416 16:33:55.545788   20924 main.go:141] libmachine: (ha-543552-m02)   
	I0416 16:33:55.545798   20924 main.go:141] libmachine: (ha-543552-m02)   </cpu>
	I0416 16:33:55.545811   20924 main.go:141] libmachine: (ha-543552-m02)   <os>
	I0416 16:33:55.545824   20924 main.go:141] libmachine: (ha-543552-m02)     <type>hvm</type>
	I0416 16:33:55.545835   20924 main.go:141] libmachine: (ha-543552-m02)     <boot dev='cdrom'/>
	I0416 16:33:55.545849   20924 main.go:141] libmachine: (ha-543552-m02)     <boot dev='hd'/>
	I0416 16:33:55.545867   20924 main.go:141] libmachine: (ha-543552-m02)     <bootmenu enable='no'/>
	I0416 16:33:55.545881   20924 main.go:141] libmachine: (ha-543552-m02)   </os>
	I0416 16:33:55.545893   20924 main.go:141] libmachine: (ha-543552-m02)   <devices>
	I0416 16:33:55.545909   20924 main.go:141] libmachine: (ha-543552-m02)     <disk type='file' device='cdrom'>
	I0416 16:33:55.545926   20924 main.go:141] libmachine: (ha-543552-m02)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/boot2docker.iso'/>
	I0416 16:33:55.545954   20924 main.go:141] libmachine: (ha-543552-m02)       <target dev='hdc' bus='scsi'/>
	I0416 16:33:55.545976   20924 main.go:141] libmachine: (ha-543552-m02)       <readonly/>
	I0416 16:33:55.545990   20924 main.go:141] libmachine: (ha-543552-m02)     </disk>
	I0416 16:33:55.546003   20924 main.go:141] libmachine: (ha-543552-m02)     <disk type='file' device='disk'>
	I0416 16:33:55.546028   20924 main.go:141] libmachine: (ha-543552-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:33:55.546046   20924 main.go:141] libmachine: (ha-543552-m02)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/ha-543552-m02.rawdisk'/>
	I0416 16:33:55.546057   20924 main.go:141] libmachine: (ha-543552-m02)       <target dev='hda' bus='virtio'/>
	I0416 16:33:55.546062   20924 main.go:141] libmachine: (ha-543552-m02)     </disk>
	I0416 16:33:55.546070   20924 main.go:141] libmachine: (ha-543552-m02)     <interface type='network'>
	I0416 16:33:55.546075   20924 main.go:141] libmachine: (ha-543552-m02)       <source network='mk-ha-543552'/>
	I0416 16:33:55.546084   20924 main.go:141] libmachine: (ha-543552-m02)       <model type='virtio'/>
	I0416 16:33:55.546091   20924 main.go:141] libmachine: (ha-543552-m02)     </interface>
	I0416 16:33:55.546099   20924 main.go:141] libmachine: (ha-543552-m02)     <interface type='network'>
	I0416 16:33:55.546108   20924 main.go:141] libmachine: (ha-543552-m02)       <source network='default'/>
	I0416 16:33:55.546120   20924 main.go:141] libmachine: (ha-543552-m02)       <model type='virtio'/>
	I0416 16:33:55.546128   20924 main.go:141] libmachine: (ha-543552-m02)     </interface>
	I0416 16:33:55.546136   20924 main.go:141] libmachine: (ha-543552-m02)     <serial type='pty'>
	I0416 16:33:55.546153   20924 main.go:141] libmachine: (ha-543552-m02)       <target port='0'/>
	I0416 16:33:55.546161   20924 main.go:141] libmachine: (ha-543552-m02)     </serial>
	I0416 16:33:55.546168   20924 main.go:141] libmachine: (ha-543552-m02)     <console type='pty'>
	I0416 16:33:55.546174   20924 main.go:141] libmachine: (ha-543552-m02)       <target type='serial' port='0'/>
	I0416 16:33:55.546181   20924 main.go:141] libmachine: (ha-543552-m02)     </console>
	I0416 16:33:55.546187   20924 main.go:141] libmachine: (ha-543552-m02)     <rng model='virtio'>
	I0416 16:33:55.546203   20924 main.go:141] libmachine: (ha-543552-m02)       <backend model='random'>/dev/random</backend>
	I0416 16:33:55.546217   20924 main.go:141] libmachine: (ha-543552-m02)     </rng>
	I0416 16:33:55.546225   20924 main.go:141] libmachine: (ha-543552-m02)     
	I0416 16:33:55.546229   20924 main.go:141] libmachine: (ha-543552-m02)     
	I0416 16:33:55.546235   20924 main.go:141] libmachine: (ha-543552-m02)   </devices>
	I0416 16:33:55.546239   20924 main.go:141] libmachine: (ha-543552-m02) </domain>
	I0416 16:33:55.546250   20924 main.go:141] libmachine: (ha-543552-m02) 
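	The XML printed above is handed to libvirt to define and boot the guest. A minimal sketch of that step using the libvirt-go bindings and the qemu:///system URI from the config dump; this is an illustration, not the kvm2 driver's actual code:

	    package main

	    import (
	        "log"

	        libvirt "github.com/libvirt/libvirt-go"
	    )

	    func main() {
	        conn, err := libvirt.NewConnect("qemu:///system")
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()

	        domainXML := "<domain type='kvm'>...</domain>" // the XML logged above

	        // Define the persistent domain from XML, then start it.
	        dom, err := conn.DomainDefineXML(domainXML)
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer dom.Free()
	        if err := dom.Create(); err != nil {
	            log.Fatal(err)
	        }
	    }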
	I0416 16:33:55.553129   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:fb:d7:4e in network default
	I0416 16:33:55.553850   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:55.553863   20924 main.go:141] libmachine: (ha-543552-m02) Ensuring networks are active...
	I0416 16:33:55.554641   20924 main.go:141] libmachine: (ha-543552-m02) Ensuring network default is active
	I0416 16:33:55.554973   20924 main.go:141] libmachine: (ha-543552-m02) Ensuring network mk-ha-543552 is active
	I0416 16:33:55.555440   20924 main.go:141] libmachine: (ha-543552-m02) Getting domain xml...
	I0416 16:33:55.556163   20924 main.go:141] libmachine: (ha-543552-m02) Creating domain...
	I0416 16:33:56.805900   20924 main.go:141] libmachine: (ha-543552-m02) Waiting to get IP...
	I0416 16:33:56.806770   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:56.807175   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:56.807230   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:56.807166   21317 retry.go:31] will retry after 290.248104ms: waiting for machine to come up
	I0416 16:33:57.098662   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:57.099157   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:57.099186   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:57.099114   21317 retry.go:31] will retry after 330.769379ms: waiting for machine to come up
	I0416 16:33:57.431847   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:57.432297   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:57.432322   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:57.432258   21317 retry.go:31] will retry after 366.242177ms: waiting for machine to come up
	I0416 16:33:57.799714   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:57.800180   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:57.800206   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:57.800142   21317 retry.go:31] will retry after 455.971916ms: waiting for machine to come up
	I0416 16:33:58.258614   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:58.259169   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:58.259213   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:58.259131   21317 retry.go:31] will retry after 490.210716ms: waiting for machine to come up
	I0416 16:33:58.750814   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:58.751413   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:58.751442   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:58.751356   21317 retry.go:31] will retry after 828.445668ms: waiting for machine to come up
	I0416 16:33:59.581783   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:33:59.582201   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:33:59.582230   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:33:59.582155   21317 retry.go:31] will retry after 798.686835ms: waiting for machine to come up
	I0416 16:34:00.382679   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:00.383142   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:00.383172   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:00.383042   21317 retry.go:31] will retry after 1.326441349s: waiting for machine to come up
	I0416 16:34:01.711538   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:01.712102   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:01.712126   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:01.712057   21317 retry.go:31] will retry after 1.802384547s: waiting for machine to come up
	I0416 16:34:03.516941   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:03.517457   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:03.517489   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:03.517417   21317 retry.go:31] will retry after 1.596867743s: waiting for machine to come up
	I0416 16:34:05.116164   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:05.116604   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:05.116653   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:05.116537   21317 retry.go:31] will retry after 2.252441268s: waiting for machine to come up
	I0416 16:34:07.371108   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:07.371563   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:07.371580   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:07.371529   21317 retry.go:31] will retry after 2.942887808s: waiting for machine to come up
	I0416 16:34:10.316223   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:10.316554   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:10.316592   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:10.316521   21317 retry.go:31] will retry after 3.833251525s: waiting for machine to come up
	I0416 16:34:14.153828   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:14.154276   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find current IP address of domain ha-543552-m02 in network mk-ha-543552
	I0416 16:34:14.154303   20924 main.go:141] libmachine: (ha-543552-m02) DBG | I0416 16:34:14.154231   21317 retry.go:31] will retry after 4.748429365s: waiting for machine to come up
	I0416 16:34:18.903815   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:18.904267   20924 main.go:141] libmachine: (ha-543552-m02) Found IP for machine: 192.168.39.80
	I0416 16:34:18.904298   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has current primary IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
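	The "waiting for machine to come up" retries above are the driver polling the libvirt network's DHCP leases for the guest's MAC address, with a growing delay between attempts. A rough sketch of that lookup (libvirt-go again; the attempt cap and backoff schedule here are illustrative, not the exact one in retry.go):

	    package sketch

	    import (
	        "fmt"
	        "strings"
	        "time"

	        libvirt "github.com/libvirt/libvirt-go"
	    )

	    // findIPByMAC polls the network's DHCP leases until one matches the
	    // guest's MAC address, mirroring the retry loop in the log above.
	    func findIPByMAC(conn *libvirt.Connect, network, mac string) (string, error) {
	        net, err := conn.LookupNetworkByName(network) // e.g. "mk-ha-543552"
	        if err != nil {
	            return "", err
	        }
	        defer net.Free()

	        delay := 300 * time.Millisecond
	        for attempt := 0; attempt < 15; attempt++ {
	            leases, err := net.GetDHCPLeases()
	            if err != nil {
	                return "", err
	            }
	            for _, l := range leases {
	                if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
	                    return l.IPaddr, nil
	                }
	            }
	            time.Sleep(delay)
	            delay += delay / 2 // grow the wait, roughly like the retries above
	        }
	        return "", fmt.Errorf("no DHCP lease for %s in %s", mac, network)
	    }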
	I0416 16:34:18.904308   20924 main.go:141] libmachine: (ha-543552-m02) Reserving static IP address...
	I0416 16:34:18.904758   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find host DHCP lease matching {name: "ha-543552-m02", mac: "52:54:00:bd:b0:d7", ip: "192.168.39.80"} in network mk-ha-543552
	I0416 16:34:18.975022   20924 main.go:141] libmachine: (ha-543552-m02) Reserved static IP address: 192.168.39.80
	I0416 16:34:18.975054   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Getting to WaitForSSH function...
	I0416 16:34:18.975061   20924 main.go:141] libmachine: (ha-543552-m02) Waiting for SSH to be available...
	I0416 16:34:18.977405   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:18.977775   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552
	I0416 16:34:18.977801   20924 main.go:141] libmachine: (ha-543552-m02) DBG | unable to find defined IP address of network mk-ha-543552 interface with MAC address 52:54:00:bd:b0:d7
	I0416 16:34:18.977907   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using SSH client type: external
	I0416 16:34:18.977935   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa (-rw-------)
	I0416 16:34:18.977975   20924 main.go:141] libmachine: (ha-543552-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:34:18.977993   20924 main.go:141] libmachine: (ha-543552-m02) DBG | About to run SSH command:
	I0416 16:34:18.978027   20924 main.go:141] libmachine: (ha-543552-m02) DBG | exit 0
	I0416 16:34:18.981475   20924 main.go:141] libmachine: (ha-543552-m02) DBG | SSH cmd err, output: exit status 255: 
	I0416 16:34:18.981493   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0416 16:34:18.981502   20924 main.go:141] libmachine: (ha-543552-m02) DBG | command : exit 0
	I0416 16:34:18.981509   20924 main.go:141] libmachine: (ha-543552-m02) DBG | err     : exit status 255
	I0416 16:34:18.981520   20924 main.go:141] libmachine: (ha-543552-m02) DBG | output  : 
	I0416 16:34:21.983020   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Getting to WaitForSSH function...
	I0416 16:34:21.985687   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:21.986122   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:21.986171   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:21.986264   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using SSH client type: external
	I0416 16:34:21.986281   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa (-rw-------)
	I0416 16:34:21.986334   20924 main.go:141] libmachine: (ha-543552-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:34:21.986377   20924 main.go:141] libmachine: (ha-543552-m02) DBG | About to run SSH command:
	I0416 16:34:21.986387   20924 main.go:141] libmachine: (ha-543552-m02) DBG | exit 0
	I0416 16:34:22.112817   20924 main.go:141] libmachine: (ha-543552-m02) DBG | SSH cmd err, output: <nil>: 
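	WaitForSSH above simply runs "exit 0" through an external ssh client until it returns status 0; the first attempt at 16:34:18 fails with status 255 because no address had been handed out yet, and the retry a few seconds later succeeds. A hedged sketch of the same probe, reusing the options visible in the log (the attempt cap is illustrative):

	    package sketch

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForSSH retries a no-op command over ssh until the guest accepts
	    // connections, using the same non-interactive options logged above.
	    func waitForSSH(ip, keyPath string) error {
	        for attempt := 0; attempt < 20; attempt++ {
	            cmd := exec.Command("ssh",
	                "-o", "ConnectTimeout=10",
	                "-o", "StrictHostKeyChecking=no",
	                "-o", "UserKnownHostsFile=/dev/null",
	                "-o", "PasswordAuthentication=no",
	                "-i", keyPath,
	                "docker@"+ip, "exit 0")
	            if err := cmd.Run(); err == nil {
	                return nil // SSH is up
	            }
	            time.Sleep(3 * time.Second) // the log retries roughly every 3s
	        }
	        return fmt.Errorf("ssh to %s never became available", ip)
	    }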
	I0416 16:34:22.113141   20924 main.go:141] libmachine: (ha-543552-m02) KVM machine creation complete!
	I0416 16:34:22.113447   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetConfigRaw
	I0416 16:34:22.113975   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:22.114193   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:22.114344   20924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:34:22.114360   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:34:22.115545   20924 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:34:22.115561   20924 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:34:22.115566   20924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:34:22.115573   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.117775   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.118089   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.118117   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.118217   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.118374   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.118525   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.118662   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.118837   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.119047   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.119060   20924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:34:22.220170   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:34:22.220194   20924 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:34:22.220202   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.222897   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.223254   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.223276   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.223480   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.223679   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.223908   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.224056   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.224273   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.224475   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.224488   20924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:34:22.329931   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:34:22.329984   20924 main.go:141] libmachine: found compatible host: buildroot
	I0416 16:34:22.329990   20924 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:34:22.329998   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetMachineName
	I0416 16:34:22.330226   20924 buildroot.go:166] provisioning hostname "ha-543552-m02"
	I0416 16:34:22.330248   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetMachineName
	I0416 16:34:22.330429   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.332660   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.332974   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.332998   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.333149   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.333316   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.333441   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.333548   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.333677   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.333879   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.333892   20924 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-543552-m02 && echo "ha-543552-m02" | sudo tee /etc/hostname
	I0416 16:34:22.456829   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552-m02
	
	I0416 16:34:22.456878   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.459435   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.459829   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.459874   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.460003   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.460184   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.460334   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.460453   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.460590   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.460820   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.460856   20924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-543552-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-543552-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-543552-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:34:22.575836   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:34:22.575867   20924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:34:22.575897   20924 buildroot.go:174] setting up certificates
	I0416 16:34:22.575907   20924 provision.go:84] configureAuth start
	I0416 16:34:22.575915   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetMachineName
	I0416 16:34:22.576177   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:34:22.578790   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.579083   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.579112   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.579193   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.581334   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.581677   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.581706   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.581853   20924 provision.go:143] copyHostCerts
	I0416 16:34:22.581893   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:34:22.581925   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 16:34:22.581935   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:34:22.581995   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:34:22.582060   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:34:22.582077   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 16:34:22.582083   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:34:22.582108   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:34:22.582146   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:34:22.582162   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 16:34:22.582168   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:34:22.582187   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:34:22.582228   20924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.ha-543552-m02 san=[127.0.0.1 192.168.39.80 ha-543552-m02 localhost minikube]
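	configureAuth above issues a per-node server certificate signed by the shared minikube CA, carrying the SAN list shown (127.0.0.1, 192.168.39.80, ha-543552-m02, localhost, minikube). A compact, hypothetical sketch of issuing that kind of SAN-bearing server cert with crypto/x509; minikube's own cert helper differs in detail:

	    package sketch

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "math/big"
	        "net"
	        "time"
	    )

	    // newServerCert issues a server certificate for the node, carrying the
	    // same kind of SAN list shown in the log, signed by the CA key pair.
	    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	        org string, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {

	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return nil, nil, err
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(time.Now().UnixNano()),
	            Subject:      pkix.Name{Organization: []string{org}}, // e.g. "jenkins.ha-543552-m02"
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            IPAddresses:  ips,      // 127.0.0.1, 192.168.39.80
	            DNSNames:     dnsNames, // ha-543552-m02, localhost, minikube
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	        if err != nil {
	            return nil, nil, err
	        }
	        return der, key, nil
	    }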
	I0416 16:34:22.771886   20924 provision.go:177] copyRemoteCerts
	I0416 16:34:22.771948   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:34:22.771968   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.774250   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.774576   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.774610   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.774793   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.774976   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.775087   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.775262   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:34:22.855612   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 16:34:22.855681   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:34:22.885615   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 16:34:22.885673   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:34:22.910435   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 16:34:22.910504   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 16:34:22.937196   20924 provision.go:87] duration metric: took 361.278852ms to configureAuth
	I0416 16:34:22.937221   20924 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:34:22.937426   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:34:22.937514   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:22.939839   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.940220   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:22.940258   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:22.940424   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:22.940606   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.940789   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:22.940945   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:22.941165   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:22.941376   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:22.941401   20924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:34:23.226298   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
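	The %!s(MISSING) in the logged command at 16:34:22.941401 looks like a formatting artifact of the log message rather than of the remote file: as the tee output echoed above shows, the file written on the node ends up with the insecure-registry option, after which crio is restarted to pick it up:

	    # /etc/sysconfig/crio.minikube, as echoed back by tee above
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '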
	
	I0416 16:34:23.226327   20924 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:34:23.226337   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetURL
	I0416 16:34:23.227535   20924 main.go:141] libmachine: (ha-543552-m02) DBG | Using libvirt version 6000000
	I0416 16:34:23.229393   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.229765   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.229793   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.229947   20924 main.go:141] libmachine: Docker is up and running!
	I0416 16:34:23.229961   20924 main.go:141] libmachine: Reticulating splines...
	I0416 16:34:23.229967   20924 client.go:171] duration metric: took 28.076240598s to LocalClient.Create
	I0416 16:34:23.229989   20924 start.go:167] duration metric: took 28.076300549s to libmachine.API.Create "ha-543552"
	I0416 16:34:23.229998   20924 start.go:293] postStartSetup for "ha-543552-m02" (driver="kvm2")
	I0416 16:34:23.230009   20924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:34:23.230025   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.230257   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:34:23.230277   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:23.232074   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.232372   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.232401   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.232506   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:23.232690   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.232805   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:23.232940   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:34:23.318833   20924 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:34:23.323995   20924 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:34:23.324019   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:34:23.324090   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:34:23.324176   20924 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 16:34:23.324187   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 16:34:23.324288   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:34:23.335957   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:34:23.363475   20924 start.go:296] duration metric: took 133.465137ms for postStartSetup
	I0416 16:34:23.363523   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetConfigRaw
	I0416 16:34:23.364079   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:34:23.366654   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.366969   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.367002   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.367189   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:34:23.367411   20924 start.go:128] duration metric: took 28.231042081s to createHost
	I0416 16:34:23.367438   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:23.369594   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.369917   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.369945   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.370071   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:23.370238   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.370374   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.370482   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:23.370661   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:34:23.370814   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0416 16:34:23.370824   20924 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:34:23.474504   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285263.449803813
	
	I0416 16:34:23.474530   20924 fix.go:216] guest clock: 1713285263.449803813
	I0416 16:34:23.474540   20924 fix.go:229] Guest: 2024-04-16 16:34:23.449803813 +0000 UTC Remote: 2024-04-16 16:34:23.367426008 +0000 UTC m=+85.602605598 (delta=82.377805ms)
	I0416 16:34:23.474562   20924 fix.go:200] guest clock delta is within tolerance: 82.377805ms
	I0416 16:34:23.474570   20924 start.go:83] releasing machines lock for "ha-543552-m02", held for 28.33828969s
	I0416 16:34:23.474597   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.474858   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:34:23.477502   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.477898   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.477930   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.480172   20924 out.go:177] * Found network options:
	I0416 16:34:23.481476   20924 out.go:177]   - NO_PROXY=192.168.39.97
	W0416 16:34:23.482700   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:34:23.482737   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.483285   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.483471   20924 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:34:23.483533   20924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:34:23.483585   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	W0416 16:34:23.483658   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:34:23.483729   20924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:34:23.483751   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:34:23.485877   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.486138   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.486273   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.486298   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.486404   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:23.486528   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:23.486553   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:23.486591   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.486738   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:23.486806   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:34:23.486883   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:34:23.486982   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:34:23.487114   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:34:23.487236   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:34:23.729972   20924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:34:23.737219   20924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:34:23.737297   20924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:34:23.754231   20924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:34:23.754256   20924 start.go:494] detecting cgroup driver to use...
	I0416 16:34:23.754321   20924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:34:23.771935   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:34:23.786287   20924 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:34:23.786346   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:34:23.800482   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:34:23.814464   20924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:34:23.928514   20924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:34:24.097127   20924 docker.go:233] disabling docker service ...
	I0416 16:34:24.097199   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:34:24.113295   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:34:24.128010   20924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:34:24.274991   20924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:34:24.416672   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:34:24.432104   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:34:24.453292   20924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:34:24.453343   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.464454   20924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:34:24.464520   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.475537   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.486405   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.497553   20924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:34:24.508506   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.519217   20924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:34:24.537820   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
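The sed commands above all edit /etc/crio/crio.conf.d/02-crio.conf; a quick way to confirm the resulting values on the node would be something like the following (a sketch only, with the expected values taken from the commands in this log):

    $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",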
	I0416 16:34:24.549220   20924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:34:24.560485   20924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:34:24.560526   20924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:34:24.575768   20924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:34:24.585837   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:34:24.715640   20924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 16:34:24.878193   20924 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:34:24.878290   20924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:34:24.883908   20924 start.go:562] Will wait 60s for crictl version
	I0416 16:34:24.883955   20924 ssh_runner.go:195] Run: which crictl
	I0416 16:34:24.888726   20924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:34:24.929464   20924 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:34:24.929557   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:34:24.962334   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:34:24.999017   20924 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:34:25.000480   20924 out.go:177]   - env NO_PROXY=192.168.39.97
	I0416 16:34:25.001899   20924 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:34:25.004680   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:25.005077   20924 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:34:11 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:34:25.005106   20924 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:34:25.005292   20924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:34:25.009855   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:34:25.024278   20924 mustload.go:65] Loading cluster: ha-543552
	I0416 16:34:25.024447   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:34:25.024689   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:34:25.024713   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:34:25.039191   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I0416 16:34:25.039565   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:34:25.039997   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:34:25.040018   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:34:25.040318   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:34:25.040481   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:34:25.042108   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:34:25.042384   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:34:25.042408   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:34:25.056113   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43169
	I0416 16:34:25.056591   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:34:25.057140   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:34:25.057164   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:34:25.057471   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:34:25.057696   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:34:25.057864   20924 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552 for IP: 192.168.39.80
	I0416 16:34:25.057874   20924 certs.go:194] generating shared ca certs ...
	I0416 16:34:25.057893   20924 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:34:25.058007   20924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:34:25.058050   20924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:34:25.058059   20924 certs.go:256] generating profile certs ...
	I0416 16:34:25.058131   20924 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key
	I0416 16:34:25.058153   20924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.46ba120c
	I0416 16:34:25.058166   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.46ba120c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.80 192.168.39.254]
	I0416 16:34:25.130651   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.46ba120c ...
	I0416 16:34:25.130681   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.46ba120c: {Name:mk66a4e33abe84b39a7f3396faacd5c2278877b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:34:25.130868   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.46ba120c ...
	I0416 16:34:25.130886   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.46ba120c: {Name:mk2fdedebc09799117b95168bd2138cb3e367cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:34:25.130991   20924 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.46ba120c -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt
	I0416 16:34:25.131123   20924 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.46ba120c -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key
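The apiserver certificate written above is generated for the SAN list shown earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.97, 192.168.39.80 and the HA VIP 192.168.39.254); one way to double-check those SANs, assuming the same paths as this run, is:

    $ openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt \
      | grep -A1 'Subject Alternative Name'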
	I0416 16:34:25.131250   20924 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key
	I0416 16:34:25.131265   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:34:25.131276   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:34:25.131290   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:34:25.131303   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:34:25.131315   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:34:25.131327   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:34:25.131339   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:34:25.131350   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:34:25.131391   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 16:34:25.131417   20924 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 16:34:25.131427   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:34:25.131449   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:34:25.131470   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:34:25.131495   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:34:25.131530   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:34:25.131561   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 16:34:25.131575   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 16:34:25.131587   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:34:25.131615   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:34:25.134758   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:34:25.135101   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:34:25.135128   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:34:25.135262   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:34:25.135460   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:34:25.135626   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:34:25.135765   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:34:25.213221   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0416 16:34:25.218720   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0416 16:34:25.233107   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0416 16:34:25.244022   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0416 16:34:25.255609   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0416 16:34:25.261372   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0416 16:34:25.279105   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0416 16:34:25.284988   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0416 16:34:25.296237   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0416 16:34:25.301027   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0416 16:34:25.312044   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0416 16:34:25.317034   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0416 16:34:25.328728   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:34:25.359168   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:34:25.388786   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:34:25.419258   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:34:25.445596   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0416 16:34:25.473023   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:34:25.499377   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:34:25.525099   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:34:25.552145   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 16:34:25.578152   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 16:34:25.608274   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:34:25.635422   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0416 16:34:25.654110   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0416 16:34:25.672130   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0416 16:34:25.690269   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0416 16:34:25.708055   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0416 16:34:25.725889   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0416 16:34:25.743861   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0416 16:34:25.761978   20924 ssh_runner.go:195] Run: openssl version
	I0416 16:34:25.767681   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 16:34:25.778660   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 16:34:25.783228   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 16:34:25.783267   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 16:34:25.789477   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 16:34:25.800954   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 16:34:25.813778   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 16:34:25.818775   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 16:34:25.818820   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 16:34:25.824660   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:34:25.836621   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:34:25.848559   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:34:25.853482   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:34:25.853524   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:34:25.859490   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
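The 51391683.0, 3ec20f2e.0 and b5213941.0 names used for the /etc/ssl/certs symlinks are the OpenSSL subject hashes of the respective certificates, which is exactly what the `openssl x509 -hash` calls above compute; for example (a sketch reusing the log's paths):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints the subject hash (b5213941, per the symlink created above)
    $ ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem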
	I0416 16:34:25.870940   20924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:34:25.875499   20924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:34:25.875543   20924 kubeadm.go:928] updating node {m02 192.168.39.80 8443 v1.29.3 crio true true} ...
	I0416 16:34:25.875624   20924 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-543552-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:34:25.875658   20924 kube-vip.go:111] generating kube-vip config ...
	I0416 16:34:25.875694   20924 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:34:25.892959   20924 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:34:25.893023   20924 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
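Once this static pod starts, kube-vip should bind the VIP 192.168.39.254 on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease and load-balance the API server on port 8443; a hypothetical spot-check from a node shell (not part of this run):

    $ ip addr show eth0 | grep 192.168.39.254
    # expected (roughly): inet 192.168.39.254/32 scope global eth0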
	I0416 16:34:25.893063   20924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:34:25.903902   20924 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0416 16:34:25.903968   20924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0416 16:34:25.914683   20924 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0416 16:34:25.914718   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 16:34:25.914794   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 16:34:25.914815   20924 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0416 16:34:25.914826   20924 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0416 16:34:25.921212   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 16:34:25.921234   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0416 16:34:26.899745   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:34:26.915916   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 16:34:26.916010   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 16:34:26.920725   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 16:34:26.920760   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0416 16:34:29.403380   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 16:34:29.403452   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 16:34:29.408991   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 16:34:29.409020   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
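All three binaries come from dl.k8s.io with the checksum URLs shown above; the same verification can be reproduced by hand (a sketch using only the URLs from this log):

    $ for bin in kubectl kubelet kubeadm; do
        curl -fsSLO "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/${bin}"
        curl -fsSLO "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/${bin}.sha256"
        echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum --check -
      done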
	I0416 16:34:29.668153   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0416 16:34:29.679072   20924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0416 16:34:29.696668   20924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:34:29.715587   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0416 16:34:29.733793   20924 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:34:29.738240   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:34:29.752017   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:34:29.893269   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:34:29.913168   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:34:29.913662   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:34:29.913718   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:34:29.928180   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I0416 16:34:29.928609   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:34:29.929070   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:34:29.929093   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:34:29.929372   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:34:29.929585   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:34:29.929778   20924 start.go:316] joinCluster: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cluster
Name:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:34:29.929901   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0416 16:34:29.929922   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:34:29.932933   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:34:29.933281   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:34:29.933305   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:34:29.933465   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:34:29.933627   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:34:29.933759   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:34:29.933878   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:34:30.095359   20924 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:34:30.095412   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jubk2w.77e69lakqh5t8imx --discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-543552-m02 --control-plane --apiserver-advertise-address=192.168.39.80 --apiserver-bind-port=8443"
	I0416 16:34:54.225301   20924 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jubk2w.77e69lakqh5t8imx --discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-543552-m02 --control-plane --apiserver-advertise-address=192.168.39.80 --apiserver-bind-port=8443": (24.129863771s)
	I0416 16:34:54.225336   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0416 16:34:54.783553   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-543552-m02 minikube.k8s.io/updated_at=2024_04_16T16_34_54_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-543552 minikube.k8s.io/primary=false
	I0416 16:34:54.915785   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-543552-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0416 16:34:55.053668   20924 start.go:318] duration metric: took 25.123888154s to joinCluster
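With the join complete and kubelet restarted, ha-543552-m02 should be running its own control-plane static pods; a hedged spot-check from that node (these commands are illustrative, not taken from this run):

    $ sudo crictl ps --name kube-apiserver
    $ sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide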
	I0416 16:34:55.053747   20924 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:34:55.055581   20924 out.go:177] * Verifying Kubernetes components...
	I0416 16:34:55.054049   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:34:55.056966   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:34:55.321939   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:34:55.359456   20924 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:34:55.359860   20924 kapi.go:59] client config for ha-543552: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt", KeyFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key", CAFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0416 16:34:55.359945   20924 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.97:8443
	I0416 16:34:55.360202   20924 node_ready.go:35] waiting up to 6m0s for node "ha-543552-m02" to be "Ready" ...
	I0416 16:34:55.360301   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:55.360309   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:55.360319   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:55.360328   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:55.371769   20924 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0416 16:34:55.860690   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:55.860710   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:55.860718   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:55.860724   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:55.872958   20924 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0416 16:34:56.360788   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:56.360809   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:56.360817   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:56.360822   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:56.364800   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:56.861251   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:56.861273   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:56.861281   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:56.861286   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:56.866167   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:34:57.360477   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:57.360499   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:57.360507   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:57.360511   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:57.364611   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:34:57.365440   20924 node_ready.go:53] node "ha-543552-m02" has status "Ready":"False"
	I0416 16:34:57.860989   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:57.861011   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:57.861020   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:57.861024   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:57.863836   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.360808   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:58.360832   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.360855   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.360863   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.364722   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:58.365505   20924 node_ready.go:49] node "ha-543552-m02" has status "Ready":"True"
	I0416 16:34:58.365519   20924 node_ready.go:38] duration metric: took 3.00529425s for node "ha-543552-m02" to be "Ready" ...
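The polling above (repeated GETs of /api/v1/nodes/ha-543552-m02 until the Ready condition flips to True) is roughly what the following one-liner does, assuming the kubeconfig context is named after the profile:

    $ kubectl --context ha-543552 wait --for=condition=Ready node/ha-543552-m02 --timeout=6m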
	I0416 16:34:58.365527   20924 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:34:58.365586   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:34:58.365596   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.365602   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.365606   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.371176   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:34:58.378037   20924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-k7bn7" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.378119   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-k7bn7
	I0416 16:34:58.378128   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.378135   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.378139   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.381456   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:58.382076   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:34:58.382090   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.382097   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.382101   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.384679   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.385141   20924 pod_ready.go:92] pod "coredns-76f75df574-k7bn7" in "kube-system" namespace has status "Ready":"True"
	I0416 16:34:58.385156   20924 pod_ready.go:81] duration metric: took 7.099248ms for pod "coredns-76f75df574-k7bn7" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.385163   20924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-l9zck" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.385202   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-l9zck
	I0416 16:34:58.385210   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.385216   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.385220   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.387884   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.388813   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:34:58.388830   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.388857   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.388865   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.391307   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.391937   20924 pod_ready.go:92] pod "coredns-76f75df574-l9zck" in "kube-system" namespace has status "Ready":"True"
	I0416 16:34:58.391952   20924 pod_ready.go:81] duration metric: took 6.783007ms for pod "coredns-76f75df574-l9zck" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.391962   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.392016   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552
	I0416 16:34:58.392027   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.392036   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.392044   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.394388   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.395127   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:34:58.395140   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.395147   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.395151   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.397646   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.398135   20924 pod_ready.go:92] pod "etcd-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:34:58.398150   20924 pod_ready.go:81] duration metric: took 6.181338ms for pod "etcd-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.398160   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:34:58.398213   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:34:58.398225   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.398235   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.398241   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.400559   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:58.401292   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:58.401305   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.401313   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.401317   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.404804   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:58.898417   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:34:58.898442   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.898453   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.898460   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.901864   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:58.902909   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:58.902921   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:58.902929   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:58.902933   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:58.905783   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:59.398680   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:34:59.398708   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:59.398720   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:59.398727   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:59.402509   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:34:59.403286   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:59.403307   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:59.403318   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:59.403324   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:59.406316   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:34:59.898348   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:34:59.898370   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:59.898380   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:59.898386   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:59.903211   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:34:59.903838   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:34:59.903852   20924 round_trippers.go:469] Request Headers:
	I0416 16:34:59.903860   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:34:59.903865   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:34:59.907217   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:00.398582   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:00.398603   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:00.398610   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:00.398615   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:00.402093   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:00.403072   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:00.403087   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:00.403095   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:00.403099   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:00.405948   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:00.406463   20924 pod_ready.go:102] pod "etcd-ha-543552-m02" in "kube-system" namespace has status "Ready":"False"
	I0416 16:35:00.898723   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:00.898743   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:00.898757   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:00.898765   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:00.904026   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:35:00.904960   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:00.904973   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:00.904981   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:00.904986   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:00.907204   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:01.399369   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:01.399390   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:01.399399   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:01.399403   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:01.403490   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:35:01.404599   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:01.404614   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:01.404619   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:01.404622   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:01.407503   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:01.898558   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:01.898578   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:01.898586   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:01.898590   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:01.901958   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:01.903193   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:01.903209   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:01.903219   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:01.903227   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:01.906563   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:02.398491   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:02.398513   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:02.398521   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:02.398525   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:02.402238   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:02.403169   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:02.403186   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:02.403196   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:02.403201   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:02.406452   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:02.407304   20924 pod_ready.go:102] pod "etcd-ha-543552-m02" in "kube-system" namespace has status "Ready":"False"
	I0416 16:35:02.898980   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:02.899004   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:02.899014   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:02.899019   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:02.902452   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:02.903310   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:02.903327   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:02.903338   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:02.903343   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:02.905915   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:03.398357   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:03.398379   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.398387   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.398391   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.402364   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.403190   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:03.403204   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.403210   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.403214   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.405992   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:03.899146   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:35:03.899166   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.899172   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.899176   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.905963   20924 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:35:03.907540   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:03.907558   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.907569   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.907576   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.911163   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.911737   20924 pod_ready.go:92] pod "etcd-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:03.911753   20924 pod_ready.go:81] duration metric: took 5.513586854s for pod "etcd-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.911766   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.911811   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552
	I0416 16:35:03.911819   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.911827   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.911830   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.914804   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:03.915395   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:03.915408   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.915414   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.915419   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.919024   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.919560   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:03.919575   20924 pod_ready.go:81] duration metric: took 7.803617ms for pod "kube-apiserver-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.919582   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.919623   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m02
	I0416 16:35:03.919633   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.919639   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.919644   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.922948   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.923571   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:03.923584   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.923593   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.923600   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.926632   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.927334   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:03.927350   20924 pod_ready.go:81] duration metric: took 7.76232ms for pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.927359   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.927399   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552
	I0416 16:35:03.927407   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.927414   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.927418   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.930348   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:35:03.961202   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:03.961236   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:03.961244   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:03.961249   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:03.964893   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:03.965508   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:03.965529   20924 pod_ready.go:81] duration metric: took 38.160856ms for pod "kube-controller-manager-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:03.965541   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:04.160906   20924 request.go:629] Waited for 195.30531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m02
	I0416 16:35:04.160974   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m02
	I0416 16:35:04.160979   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.160987   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.160992   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.164563   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:04.361532   20924 request.go:629] Waited for 196.076002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:04.361589   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:04.361594   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.361605   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.361610   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.365413   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:04.366011   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:04.366031   20924 pod_ready.go:81] duration metric: took 400.48186ms for pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
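Note on the "Waited for ... due to client-side throttling" lines above: the delay is introduced by client-go's own rate limiter (the QPS/Burst fields on the REST config), not by the API server's priority-and-fairness machinery. A minimal sketch in Go of where that limiter lives; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not the values minikube actually uses:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a kubeconfig the way kubectl would (path is an assumption).
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }

        // Requests beyond Burst are queued client-side until the QPS budget
        // allows them, which is what produces the throttling log messages.
        config.QPS = 5
        config.Burst = 10

        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }

Raising QPS/Burst (or reusing already-fetched objects instead of re-GETting them) is the usual way to avoid these waits; the loop in the log above simply absorbs them.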
	I0416 16:35:04.366043   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vkts" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:04.561178   20924 request.go:629] Waited for 195.070796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vkts
	I0416 16:35:04.561249   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vkts
	I0416 16:35:04.561254   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.561261   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.561267   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.565398   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:35:04.761609   20924 request.go:629] Waited for 195.41036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:04.761692   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:04.761711   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.761722   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.761728   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.766577   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:35:04.767315   20924 pod_ready.go:92] pod "kube-proxy-2vkts" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:04.767332   20924 pod_ready.go:81] duration metric: took 401.282798ms for pod "kube-proxy-2vkts" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:04.767341   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9lhc" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:04.961840   20924 request.go:629] Waited for 194.444607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9lhc
	I0416 16:35:04.961905   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9lhc
	I0416 16:35:04.961910   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:04.961916   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:04.961920   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:04.964936   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.161673   20924 request.go:629] Waited for 195.78813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:05.161739   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:05.161745   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.161753   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.161759   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.165689   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.166550   20924 pod_ready.go:92] pod "kube-proxy-c9lhc" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:05.166573   20924 pod_ready.go:81] duration metric: took 399.225298ms for pod "kube-proxy-c9lhc" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.166585   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.361662   20924 request.go:629] Waited for 195.004558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552
	I0416 16:35:05.361719   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552
	I0416 16:35:05.361724   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.361732   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.361737   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.365411   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.560782   20924 request.go:629] Waited for 194.277771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:05.560854   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:35:05.560873   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.560881   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.560885   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.564496   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.565425   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:05.565443   20924 pod_ready.go:81] duration metric: took 398.851526ms for pod "kube-scheduler-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.565452   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.761506   20924 request.go:629] Waited for 195.996627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m02
	I0416 16:35:05.761591   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m02
	I0416 16:35:05.761604   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.761615   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.761623   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.765648   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:35:05.961805   20924 request.go:629] Waited for 195.376797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:05.961869   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:35:05.961889   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.961904   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.961910   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.965893   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:05.966659   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:35:05.966685   20924 pod_ready.go:81] duration metric: took 401.226092ms for pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:35:05.966699   20924 pod_ready.go:38] duration metric: took 7.601162362s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
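The block above is a readiness poll: for each system-critical pod the client repeatedly GETs the pod (and its node) until the pod reports the Ready condition, e.g. etcd-ha-543552-m02 flips from "Ready":"False" to "Ready":"True" after roughly 5.5 seconds. A rough, simplified sketch of that kind of wait with client-go; this is not minikube's pod_ready.go, and the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls a pod until its PodReady condition is True.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-ha-543552-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }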
	I0416 16:35:05.966714   20924 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:35:05.966778   20924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:35:05.984626   20924 api_server.go:72] duration metric: took 10.930847996s to wait for apiserver process to appear ...
	I0416 16:35:05.984650   20924 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:35:05.984670   20924 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0416 16:35:05.989259   20924 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0416 16:35:05.989311   20924 round_trippers.go:463] GET https://192.168.39.97:8443/version
	I0416 16:35:05.989317   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:05.989325   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:05.989335   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:05.990425   20924 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 16:35:05.990525   20924 api_server.go:141] control plane version: v1.29.3
	I0416 16:35:05.990545   20924 api_server.go:131] duration metric: took 5.888134ms to wait for apiserver health ...
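Once the pods are Ready, the check above switches to the API server itself: it confirms a kube-apiserver process exists, then expects GET /healthz to return 200 with the body "ok", and finally reads /version for the control-plane version. A minimal version of the healthz probe; skipping TLS verification is an assumption made only to keep the sketch short (a real check would trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is only for brevity in this sketch.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        resp, err := client.Get("https://192.168.39.97:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok", as in the log above.
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }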
	I0416 16:35:05.990553   20924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:35:06.160915   20924 request.go:629] Waited for 170.294501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:35:06.160999   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:35:06.161005   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:06.161012   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:06.161016   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:06.167104   20924 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:35:06.173151   20924 system_pods.go:59] 17 kube-system pods found
	I0416 16:35:06.173206   20924 system_pods.go:61] "coredns-76f75df574-k7bn7" [8f45a7f4-5779-49ad-949c-29fe8ad7d485] Running
	I0416 16:35:06.173213   20924 system_pods.go:61] "coredns-76f75df574-l9zck" [4f0d01cc-4c32-4953-88ec-f07e72666894] Running
	I0416 16:35:06.173217   20924 system_pods.go:61] "etcd-ha-543552" [e0b55a81-bfa4-4ba4-adde-69d72d728240] Running
	I0416 16:35:06.173221   20924 system_pods.go:61] "etcd-ha-543552-m02" [79a7bdf2-6297-434f-afde-dcee38a7f4b6] Running
	I0416 16:35:06.173224   20924 system_pods.go:61] "kindnet-7hwtp" [f54400cd-4ab3-4e00-b741-e1419d1b3b66] Running
	I0416 16:35:06.173227   20924 system_pods.go:61] "kindnet-q4275" [2f65c59e-1e69-402a-af3a-2c28f7783c9f] Running
	I0416 16:35:06.173230   20924 system_pods.go:61] "kube-apiserver-ha-543552" [4010eca2-0d2e-46c1-9c8f-59961c27c3bf] Running
	I0416 16:35:06.173233   20924 system_pods.go:61] "kube-apiserver-ha-543552-m02" [f2e26e25-fb61-4754-a98b-1c0235c2907f] Running
	I0416 16:35:06.173236   20924 system_pods.go:61] "kube-controller-manager-ha-543552" [9aa3103c-1ada-4947-84cb-c6d6c80274f0] Running
	I0416 16:35:06.173240   20924 system_pods.go:61] "kube-controller-manager-ha-543552-m02" [d0cfc02d-baa6-4c39-960a-c94989f7f545] Running
	I0416 16:35:06.173244   20924 system_pods.go:61] "kube-proxy-2vkts" [4d33f122-fdc5-47ef-abd8-1e3074401db9] Running
	I0416 16:35:06.173247   20924 system_pods.go:61] "kube-proxy-c9lhc" [b8027952-1449-42c9-9bea-14aa1eb113aa] Running
	I0416 16:35:06.173254   20924 system_pods.go:61] "kube-scheduler-ha-543552" [644f8507-38cf-41d2-8c3a-cf1d2817bcff] Running
	I0416 16:35:06.173257   20924 system_pods.go:61] "kube-scheduler-ha-543552-m02" [06bfa48f-a357-4c0b-a36d-fd9802387211] Running
	I0416 16:35:06.173259   20924 system_pods.go:61] "kube-vip-ha-543552" [73f7261f-431b-4d66-9567-cd65dafbf212] Running
	I0416 16:35:06.173264   20924 system_pods.go:61] "kube-vip-ha-543552-m02" [315f50da-9df3-47a5-a88f-72857a417304] Running
	I0416 16:35:06.173268   20924 system_pods.go:61] "storage-provisioner" [663f4c76-01f8-4664-9345-740540fdc41c] Running
	I0416 16:35:06.173274   20924 system_pods.go:74] duration metric: took 182.71198ms to wait for pod list to return data ...
	I0416 16:35:06.173289   20924 default_sa.go:34] waiting for default service account to be created ...
	I0416 16:35:06.361743   20924 request.go:629] Waited for 188.371258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0416 16:35:06.361797   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0416 16:35:06.361802   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:06.361809   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:06.361813   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:06.365591   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:06.365841   20924 default_sa.go:45] found service account: "default"
	I0416 16:35:06.365861   20924 default_sa.go:55] duration metric: took 192.566887ms for default service account to be created ...
	I0416 16:35:06.365868   20924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 16:35:06.561305   20924 request.go:629] Waited for 195.367623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:35:06.561369   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:35:06.561374   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:06.561382   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:06.561387   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:06.568778   20924 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 16:35:06.574146   20924 system_pods.go:86] 17 kube-system pods found
	I0416 16:35:06.574172   20924 system_pods.go:89] "coredns-76f75df574-k7bn7" [8f45a7f4-5779-49ad-949c-29fe8ad7d485] Running
	I0416 16:35:06.574177   20924 system_pods.go:89] "coredns-76f75df574-l9zck" [4f0d01cc-4c32-4953-88ec-f07e72666894] Running
	I0416 16:35:06.574182   20924 system_pods.go:89] "etcd-ha-543552" [e0b55a81-bfa4-4ba4-adde-69d72d728240] Running
	I0416 16:35:06.574186   20924 system_pods.go:89] "etcd-ha-543552-m02" [79a7bdf2-6297-434f-afde-dcee38a7f4b6] Running
	I0416 16:35:06.574189   20924 system_pods.go:89] "kindnet-7hwtp" [f54400cd-4ab3-4e00-b741-e1419d1b3b66] Running
	I0416 16:35:06.574193   20924 system_pods.go:89] "kindnet-q4275" [2f65c59e-1e69-402a-af3a-2c28f7783c9f] Running
	I0416 16:35:06.574200   20924 system_pods.go:89] "kube-apiserver-ha-543552" [4010eca2-0d2e-46c1-9c8f-59961c27c3bf] Running
	I0416 16:35:06.574205   20924 system_pods.go:89] "kube-apiserver-ha-543552-m02" [f2e26e25-fb61-4754-a98b-1c0235c2907f] Running
	I0416 16:35:06.574209   20924 system_pods.go:89] "kube-controller-manager-ha-543552" [9aa3103c-1ada-4947-84cb-c6d6c80274f0] Running
	I0416 16:35:06.574213   20924 system_pods.go:89] "kube-controller-manager-ha-543552-m02" [d0cfc02d-baa6-4c39-960a-c94989f7f545] Running
	I0416 16:35:06.574217   20924 system_pods.go:89] "kube-proxy-2vkts" [4d33f122-fdc5-47ef-abd8-1e3074401db9] Running
	I0416 16:35:06.574221   20924 system_pods.go:89] "kube-proxy-c9lhc" [b8027952-1449-42c9-9bea-14aa1eb113aa] Running
	I0416 16:35:06.574224   20924 system_pods.go:89] "kube-scheduler-ha-543552" [644f8507-38cf-41d2-8c3a-cf1d2817bcff] Running
	I0416 16:35:06.574228   20924 system_pods.go:89] "kube-scheduler-ha-543552-m02" [06bfa48f-a357-4c0b-a36d-fd9802387211] Running
	I0416 16:35:06.574232   20924 system_pods.go:89] "kube-vip-ha-543552" [73f7261f-431b-4d66-9567-cd65dafbf212] Running
	I0416 16:35:06.574236   20924 system_pods.go:89] "kube-vip-ha-543552-m02" [315f50da-9df3-47a5-a88f-72857a417304] Running
	I0416 16:35:06.574239   20924 system_pods.go:89] "storage-provisioner" [663f4c76-01f8-4664-9345-740540fdc41c] Running
	I0416 16:35:06.574245   20924 system_pods.go:126] duration metric: took 208.372151ms to wait for k8s-apps to be running ...
	I0416 16:35:06.574257   20924 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 16:35:06.574302   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:35:06.591603   20924 system_svc.go:56] duration metric: took 17.33744ms WaitForService to wait for kubelet
	I0416 16:35:06.591632   20924 kubeadm.go:576] duration metric: took 11.537857616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
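The kubelet check above only cares about the exit status of systemctl is-active run on the node over SSH. A local stand-in for the same idea (running the command directly with os/exec rather than through minikube's ssh_runner, and dropping the extra "service" token from the log's command line):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet kubelet` exits 0 when the unit is
        // active and non-zero otherwise; only the exit code matters here.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }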
	I0416 16:35:06.591652   20924 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:35:06.760823   20924 request.go:629] Waited for 169.101079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes
	I0416 16:35:06.760909   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes
	I0416 16:35:06.760916   20924 round_trippers.go:469] Request Headers:
	I0416 16:35:06.760927   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:35:06.760937   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:35:06.764601   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:35:06.765687   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:35:06.765709   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:35:06.765720   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:35:06.765723   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:35:06.765727   20924 node_conditions.go:105] duration metric: took 174.071725ms to run NodePressure ...
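The NodePressure step reads each node's reported capacity; both nodes above show 17734596Ki of ephemeral storage and 2 CPUs. A short sketch of pulling those fields with client-go (kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            // Mirrors the "node storage ephemeral capacity" / "node cpu capacity" lines.
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }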
	I0416 16:35:06.765742   20924 start.go:240] waiting for startup goroutines ...
	I0416 16:35:06.765765   20924 start.go:254] writing updated cluster config ...
	I0416 16:35:06.767826   20924 out.go:177] 
	I0416 16:35:06.769387   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:35:06.769504   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:35:06.771152   20924 out.go:177] * Starting "ha-543552-m03" control-plane node in "ha-543552" cluster
	I0416 16:35:06.772343   20924 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:35:06.772363   20924 cache.go:56] Caching tarball of preloaded images
	I0416 16:35:06.772438   20924 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:35:06.772449   20924 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:35:06.772533   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:35:06.772687   20924 start.go:360] acquireMachinesLock for ha-543552-m03: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:35:06.772724   20924 start.go:364] duration metric: took 20.458µs to acquireMachinesLock for "ha-543552-m03"
	I0416 16:35:06.772745   20924 start.go:93] Provisioning new machine with config: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:35:06.772833   20924 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0416 16:35:06.774391   20924 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 16:35:06.774473   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:35:06.774516   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:35:06.789258   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0416 16:35:06.789719   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:35:06.790194   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:35:06.790212   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:35:06.790510   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:35:06.790729   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetMachineName
	I0416 16:35:06.790882   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:06.791021   20924 start.go:159] libmachine.API.Create for "ha-543552" (driver="kvm2")
	I0416 16:35:06.791052   20924 client.go:168] LocalClient.Create starting
	I0416 16:35:06.791084   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 16:35:06.791132   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:35:06.791152   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:35:06.791210   20924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 16:35:06.791237   20924 main.go:141] libmachine: Decoding PEM data...
	I0416 16:35:06.791254   20924 main.go:141] libmachine: Parsing certificate...
	I0416 16:35:06.791281   20924 main.go:141] libmachine: Running pre-create checks...
	I0416 16:35:06.791292   20924 main.go:141] libmachine: (ha-543552-m03) Calling .PreCreateCheck
	I0416 16:35:06.791451   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetConfigRaw
	I0416 16:35:06.791774   20924 main.go:141] libmachine: Creating machine...
	I0416 16:35:06.791788   20924 main.go:141] libmachine: (ha-543552-m03) Calling .Create
	I0416 16:35:06.791910   20924 main.go:141] libmachine: (ha-543552-m03) Creating KVM machine...
	I0416 16:35:06.793102   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found existing default KVM network
	I0416 16:35:06.793243   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found existing private KVM network mk-ha-543552
	I0416 16:35:06.793408   20924 main.go:141] libmachine: (ha-543552-m03) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03 ...
	I0416 16:35:06.793436   20924 main.go:141] libmachine: (ha-543552-m03) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:35:06.793470   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:06.793372   21709 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:35:06.793549   20924 main.go:141] libmachine: (ha-543552-m03) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 16:35:07.001488   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:07.001360   21709 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa...
	I0416 16:35:07.314320   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:07.314215   21709 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/ha-543552-m03.rawdisk...
	I0416 16:35:07.314362   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Writing magic tar header
	I0416 16:35:07.314374   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Writing SSH key tar header
	I0416 16:35:07.314382   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:07.314323   21709 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03 ...
	I0416 16:35:07.314441   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03
	I0416 16:35:07.314459   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03 (perms=drwx------)
	I0416 16:35:07.314466   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 16:35:07.314502   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 16:35:07.314532   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:35:07.314544   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 16:35:07.314557   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 16:35:07.314566   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 16:35:07.314576   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 16:35:07.314592   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 16:35:07.314603   20924 main.go:141] libmachine: (ha-543552-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 16:35:07.314613   20924 main.go:141] libmachine: (ha-543552-m03) Creating domain...
	I0416 16:35:07.314622   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home/jenkins
	I0416 16:35:07.314631   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Checking permissions on dir: /home
	I0416 16:35:07.314642   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Skipping /home - not owner
	I0416 16:35:07.315483   20924 main.go:141] libmachine: (ha-543552-m03) define libvirt domain using xml: 
	I0416 16:35:07.315505   20924 main.go:141] libmachine: (ha-543552-m03) <domain type='kvm'>
	I0416 16:35:07.315515   20924 main.go:141] libmachine: (ha-543552-m03)   <name>ha-543552-m03</name>
	I0416 16:35:07.315530   20924 main.go:141] libmachine: (ha-543552-m03)   <memory unit='MiB'>2200</memory>
	I0416 16:35:07.315540   20924 main.go:141] libmachine: (ha-543552-m03)   <vcpu>2</vcpu>
	I0416 16:35:07.315551   20924 main.go:141] libmachine: (ha-543552-m03)   <features>
	I0416 16:35:07.315563   20924 main.go:141] libmachine: (ha-543552-m03)     <acpi/>
	I0416 16:35:07.315573   20924 main.go:141] libmachine: (ha-543552-m03)     <apic/>
	I0416 16:35:07.315586   20924 main.go:141] libmachine: (ha-543552-m03)     <pae/>
	I0416 16:35:07.315596   20924 main.go:141] libmachine: (ha-543552-m03)     
	I0416 16:35:07.315605   20924 main.go:141] libmachine: (ha-543552-m03)   </features>
	I0416 16:35:07.315619   20924 main.go:141] libmachine: (ha-543552-m03)   <cpu mode='host-passthrough'>
	I0416 16:35:07.315626   20924 main.go:141] libmachine: (ha-543552-m03)   
	I0416 16:35:07.315635   20924 main.go:141] libmachine: (ha-543552-m03)   </cpu>
	I0416 16:35:07.315643   20924 main.go:141] libmachine: (ha-543552-m03)   <os>
	I0416 16:35:07.315655   20924 main.go:141] libmachine: (ha-543552-m03)     <type>hvm</type>
	I0416 16:35:07.315667   20924 main.go:141] libmachine: (ha-543552-m03)     <boot dev='cdrom'/>
	I0416 16:35:07.315676   20924 main.go:141] libmachine: (ha-543552-m03)     <boot dev='hd'/>
	I0416 16:35:07.315692   20924 main.go:141] libmachine: (ha-543552-m03)     <bootmenu enable='no'/>
	I0416 16:35:07.315701   20924 main.go:141] libmachine: (ha-543552-m03)   </os>
	I0416 16:35:07.315725   20924 main.go:141] libmachine: (ha-543552-m03)   <devices>
	I0416 16:35:07.315744   20924 main.go:141] libmachine: (ha-543552-m03)     <disk type='file' device='cdrom'>
	I0416 16:35:07.315776   20924 main.go:141] libmachine: (ha-543552-m03)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/boot2docker.iso'/>
	I0416 16:35:07.315799   20924 main.go:141] libmachine: (ha-543552-m03)       <target dev='hdc' bus='scsi'/>
	I0416 16:35:07.315815   20924 main.go:141] libmachine: (ha-543552-m03)       <readonly/>
	I0416 16:35:07.315833   20924 main.go:141] libmachine: (ha-543552-m03)     </disk>
	I0416 16:35:07.315851   20924 main.go:141] libmachine: (ha-543552-m03)     <disk type='file' device='disk'>
	I0416 16:35:07.315865   20924 main.go:141] libmachine: (ha-543552-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 16:35:07.315882   20924 main.go:141] libmachine: (ha-543552-m03)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/ha-543552-m03.rawdisk'/>
	I0416 16:35:07.315893   20924 main.go:141] libmachine: (ha-543552-m03)       <target dev='hda' bus='virtio'/>
	I0416 16:35:07.315904   20924 main.go:141] libmachine: (ha-543552-m03)     </disk>
	I0416 16:35:07.315917   20924 main.go:141] libmachine: (ha-543552-m03)     <interface type='network'>
	I0416 16:35:07.315929   20924 main.go:141] libmachine: (ha-543552-m03)       <source network='mk-ha-543552'/>
	I0416 16:35:07.315940   20924 main.go:141] libmachine: (ha-543552-m03)       <model type='virtio'/>
	I0416 16:35:07.315948   20924 main.go:141] libmachine: (ha-543552-m03)     </interface>
	I0416 16:35:07.315959   20924 main.go:141] libmachine: (ha-543552-m03)     <interface type='network'>
	I0416 16:35:07.315971   20924 main.go:141] libmachine: (ha-543552-m03)       <source network='default'/>
	I0416 16:35:07.315982   20924 main.go:141] libmachine: (ha-543552-m03)       <model type='virtio'/>
	I0416 16:35:07.315998   20924 main.go:141] libmachine: (ha-543552-m03)     </interface>
	I0416 16:35:07.316016   20924 main.go:141] libmachine: (ha-543552-m03)     <serial type='pty'>
	I0416 16:35:07.316028   20924 main.go:141] libmachine: (ha-543552-m03)       <target port='0'/>
	I0416 16:35:07.316037   20924 main.go:141] libmachine: (ha-543552-m03)     </serial>
	I0416 16:35:07.316050   20924 main.go:141] libmachine: (ha-543552-m03)     <console type='pty'>
	I0416 16:35:07.316063   20924 main.go:141] libmachine: (ha-543552-m03)       <target type='serial' port='0'/>
	I0416 16:35:07.316077   20924 main.go:141] libmachine: (ha-543552-m03)     </console>
	I0416 16:35:07.316092   20924 main.go:141] libmachine: (ha-543552-m03)     <rng model='virtio'>
	I0416 16:35:07.316106   20924 main.go:141] libmachine: (ha-543552-m03)       <backend model='random'>/dev/random</backend>
	I0416 16:35:07.316117   20924 main.go:141] libmachine: (ha-543552-m03)     </rng>
	I0416 16:35:07.316126   20924 main.go:141] libmachine: (ha-543552-m03)     
	I0416 16:35:07.316133   20924 main.go:141] libmachine: (ha-543552-m03)     
	I0416 16:35:07.316149   20924 main.go:141] libmachine: (ha-543552-m03)   </devices>
	I0416 16:35:07.316164   20924 main.go:141] libmachine: (ha-543552-m03) </domain>
	I0416 16:35:07.316179   20924 main.go:141] libmachine: (ha-543552-m03) 
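For readability, the domain definition interleaved with the log prefixes above corresponds to roughly the following libvirt XML (reassembled from the log lines themselves; indentation approximate):

    <domain type='kvm'>
      <name>ha-543552-m03</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/ha-543552-m03.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-ha-543552'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>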
	I0416 16:35:07.322334   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:34:f6:92 in network default
	I0416 16:35:07.322901   20924 main.go:141] libmachine: (ha-543552-m03) Ensuring networks are active...
	I0416 16:35:07.322922   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:07.323595   20924 main.go:141] libmachine: (ha-543552-m03) Ensuring network default is active
	I0416 16:35:07.323918   20924 main.go:141] libmachine: (ha-543552-m03) Ensuring network mk-ha-543552 is active
	I0416 16:35:07.324382   20924 main.go:141] libmachine: (ha-543552-m03) Getting domain xml...
	I0416 16:35:07.325048   20924 main.go:141] libmachine: (ha-543552-m03) Creating domain...
	I0416 16:35:08.531141   20924 main.go:141] libmachine: (ha-543552-m03) Waiting to get IP...
	I0416 16:35:08.531828   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:08.532253   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:08.532281   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:08.532227   21709 retry.go:31] will retry after 294.77499ms: waiting for machine to come up
	I0416 16:35:08.828811   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:08.829251   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:08.829281   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:08.829200   21709 retry.go:31] will retry after 297.816737ms: waiting for machine to come up
	I0416 16:35:09.128910   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:09.129461   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:09.129493   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:09.129418   21709 retry.go:31] will retry after 477.127226ms: waiting for machine to come up
	I0416 16:35:09.607949   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:09.608418   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:09.608442   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:09.608357   21709 retry.go:31] will retry after 456.349369ms: waiting for machine to come up
	I0416 16:35:10.065854   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:10.066365   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:10.066396   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:10.066325   21709 retry.go:31] will retry after 561.879222ms: waiting for machine to come up
	I0416 16:35:10.629994   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:10.630413   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:10.630429   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:10.630371   21709 retry.go:31] will retry after 726.3447ms: waiting for machine to come up
	I0416 16:35:11.357873   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:11.358350   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:11.358380   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:11.358296   21709 retry.go:31] will retry after 797.57283ms: waiting for machine to come up
	I0416 16:35:12.157789   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:12.158306   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:12.158346   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:12.158269   21709 retry.go:31] will retry after 1.434488181s: waiting for machine to come up
	I0416 16:35:13.594778   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:13.595213   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:13.595242   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:13.595160   21709 retry.go:31] will retry after 1.748054995s: waiting for machine to come up
	I0416 16:35:15.346754   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:15.347223   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:15.347251   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:15.347170   21709 retry.go:31] will retry after 1.738692519s: waiting for machine to come up
	I0416 16:35:17.087361   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:17.087832   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:17.087860   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:17.087777   21709 retry.go:31] will retry after 1.747698931s: waiting for machine to come up
	I0416 16:35:18.837831   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:18.838296   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:18.838316   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:18.838267   21709 retry.go:31] will retry after 3.508870725s: waiting for machine to come up
	I0416 16:35:22.349123   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:22.349525   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:22.349557   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:22.349485   21709 retry.go:31] will retry after 3.956653373s: waiting for machine to come up
	I0416 16:35:26.309866   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:26.310253   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find current IP address of domain ha-543552-m03 in network mk-ha-543552
	I0416 16:35:26.310274   20924 main.go:141] libmachine: (ha-543552-m03) DBG | I0416 16:35:26.310209   21709 retry.go:31] will retry after 5.115453223s: waiting for machine to come up
	I0416 16:35:31.429812   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.430299   20924 main.go:141] libmachine: (ha-543552-m03) Found IP for machine: 192.168.39.125
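Aside: the repeated "will retry after ...: waiting for machine to come up" lines above are a polling loop that re-checks the domain's DHCP lease with a growing, jittered delay until an address appears. A stand-alone sketch of that pattern in plain Go (the lookup callback is a placeholder, not the driver's real lease query):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// waitForIP polls lookup until it returns an address, sleeping a jittered,
// growing interval between attempts, much like the retry.go lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base interval each round
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errNoIP
		}
		return "192.168.39.125", nil // example address taken from the log
	}, time.Minute)
	fmt.Println(ip, err)
}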
	I0416 16:35:31.430322   20924 main.go:141] libmachine: (ha-543552-m03) Reserving static IP address...
	I0416 16:35:31.430338   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has current primary IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.430716   20924 main.go:141] libmachine: (ha-543552-m03) DBG | unable to find host DHCP lease matching {name: "ha-543552-m03", mac: "52:54:00:f9:15:9d", ip: "192.168.39.125"} in network mk-ha-543552
	I0416 16:35:31.501459   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Getting to WaitForSSH function...
	I0416 16:35:31.501495   20924 main.go:141] libmachine: (ha-543552-m03) Reserved static IP address: 192.168.39.125
	I0416 16:35:31.501541   20924 main.go:141] libmachine: (ha-543552-m03) Waiting for SSH to be available...
	I0416 16:35:31.504105   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.504608   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.504638   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.504779   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Using SSH client type: external
	I0416 16:35:31.504808   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa (-rw-------)
	I0416 16:35:31.504854   20924 main.go:141] libmachine: (ha-543552-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 16:35:31.504868   20924 main.go:141] libmachine: (ha-543552-m03) DBG | About to run SSH command:
	I0416 16:35:31.504879   20924 main.go:141] libmachine: (ha-543552-m03) DBG | exit 0
	I0416 16:35:31.628917   20924 main.go:141] libmachine: (ha-543552-m03) DBG | SSH cmd err, output: <nil>: 
	I0416 16:35:31.629177   20924 main.go:141] libmachine: (ha-543552-m03) KVM machine creation complete!
	I0416 16:35:31.629557   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetConfigRaw
	I0416 16:35:31.630120   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:31.630327   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:31.630485   20924 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 16:35:31.630501   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:35:31.631760   20924 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 16:35:31.631775   20924 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 16:35:31.631793   20924 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 16:35:31.631804   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:31.634109   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.634494   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.634514   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.634686   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:31.634845   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.635017   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.635163   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:31.635311   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:31.635489   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:31.635506   20924 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 16:35:31.736455   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
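Aside: both the external ssh probe and the native-client probe above reduce to running "exit 0" on the guest until it succeeds. A minimal sketch with golang.org/x/crypto/ssh; the address, user, and key path below mirror values in the log but are stand-ins, and this is not the provisioner's actual code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// sshAlive runs "exit 0" on the target host; a nil error means SSH is usable.
func sshAlive(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	// Placeholder key path; the log uses the machine's id_rsa under .minikube/machines.
	fmt.Println(sshAlive("192.168.39.125:22", "docker", "/path/to/id_rsa"))
}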
	I0416 16:35:31.736498   20924 main.go:141] libmachine: Detecting the provisioner...
	I0416 16:35:31.736510   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:31.739233   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.739547   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.739580   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.739706   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:31.739908   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.740065   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.740209   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:31.740338   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:31.740510   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:31.740524   20924 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 16:35:31.842072   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 16:35:31.842154   20924 main.go:141] libmachine: found compatible host: buildroot
	I0416 16:35:31.842165   20924 main.go:141] libmachine: Provisioning with buildroot...
	I0416 16:35:31.842172   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetMachineName
	I0416 16:35:31.842444   20924 buildroot.go:166] provisioning hostname "ha-543552-m03"
	I0416 16:35:31.842474   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetMachineName
	I0416 16:35:31.842651   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:31.845282   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.845687   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.845716   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.845873   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:31.846059   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.846189   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.846334   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:31.846545   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:31.846750   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:31.846769   20924 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-543552-m03 && echo "ha-543552-m03" | sudo tee /etc/hostname
	I0416 16:35:31.968895   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552-m03
	
	I0416 16:35:31.968920   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:31.971726   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.972138   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:31.972161   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:31.972393   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:31.972542   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.972721   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:31.972885   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:31.973036   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:31.973192   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:31.973205   20924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-543552-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-543552-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-543552-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:35:32.086601   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
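Aside: the shell snippet above keeps /etc/hosts consistent with the new hostname: if no line already maps ha-543552-m03, it rewrites an existing 127.0.1.1 entry, otherwise it appends one. The same idempotent logic expressed in Go over an in-memory copy of the file, purely for illustration (the real provisioner runs the shell shown in the log over SSH):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry returns hosts with a "127.0.1.1 <name>" mapping, mirroring
// the grep/sed/tee logic in the provisioning command above.
func ensureHostsEntry(hosts, name string) string {
	// Already mapped? Leave the file alone.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	// Rewrite an existing 127.0.1.1 line if there is one...
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	// ...otherwise append a new entry.
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(hosts, "ha-543552-m03"))
}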
	I0416 16:35:32.086629   20924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:35:32.086646   20924 buildroot.go:174] setting up certificates
	I0416 16:35:32.086656   20924 provision.go:84] configureAuth start
	I0416 16:35:32.086668   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetMachineName
	I0416 16:35:32.086899   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:35:32.089858   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.090257   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.090290   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.090427   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.092569   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.092881   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.092923   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.093070   20924 provision.go:143] copyHostCerts
	I0416 16:35:32.093103   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:35:32.093142   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 16:35:32.093154   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:35:32.093233   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:35:32.093325   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:35:32.093351   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 16:35:32.093360   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:35:32.093395   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:35:32.093452   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:35:32.093473   20924 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 16:35:32.093483   20924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:35:32.093517   20924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:35:32.093581   20924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.ha-543552-m03 san=[127.0.0.1 192.168.39.125 ha-543552-m03 localhost minikube]
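Aside: "generating server cert ... san=[...]" corresponds to issuing a CA-signed server certificate whose SANs cover the listed IPs and hostnames. A compact, self-contained sketch with the standard crypto/x509 package; it creates a throwaway CA in memory instead of loading ca.pem/ca-key.pem from the minikube directory, so it only illustrates the shape of the step.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (the real flow loads ca.pem / ca-key.pem from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs reported in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-543552-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.125")},
		DNSNames:     []string{"ha-543552-m03", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(srvDER), err)
}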
	I0416 16:35:32.312980   20924 provision.go:177] copyRemoteCerts
	I0416 16:35:32.313038   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:35:32.313061   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.315541   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.315899   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.315928   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.316156   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:32.316374   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.316572   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:32.316716   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:35:32.396258   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 16:35:32.396328   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:35:32.427516   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 16:35:32.427588   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0416 16:35:32.458091   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 16:35:32.458148   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 16:35:32.484758   20924 provision.go:87] duration metric: took 398.089807ms to configureAuth
	I0416 16:35:32.484792   20924 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:35:32.485049   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:35:32.485143   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.487937   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.488322   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.488350   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.488560   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:32.488782   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.488945   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.489071   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:32.489242   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:32.489419   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:32.489434   20924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:35:32.782848   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 16:35:32.782876   20924 main.go:141] libmachine: Checking connection to Docker...
	I0416 16:35:32.782886   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetURL
	I0416 16:35:32.784332   20924 main.go:141] libmachine: (ha-543552-m03) DBG | Using libvirt version 6000000
	I0416 16:35:32.786671   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.787017   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.787044   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.787203   20924 main.go:141] libmachine: Docker is up and running!
	I0416 16:35:32.787214   20924 main.go:141] libmachine: Reticulating splines...
	I0416 16:35:32.787221   20924 client.go:171] duration metric: took 25.996158862s to LocalClient.Create
	I0416 16:35:32.787247   20924 start.go:167] duration metric: took 25.996226949s to libmachine.API.Create "ha-543552"
	I0416 16:35:32.787259   20924 start.go:293] postStartSetup for "ha-543552-m03" (driver="kvm2")
	I0416 16:35:32.787286   20924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:35:32.787315   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:32.787560   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:35:32.787590   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.789792   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.790137   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.790167   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.790275   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:32.790470   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.790628   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:32.790773   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:35:32.877265   20924 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:35:32.882431   20924 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:35:32.882454   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:35:32.882521   20924 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:35:32.882609   20924 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 16:35:32.882619   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 16:35:32.882717   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:35:32.893123   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:35:32.919872   20924 start.go:296] duration metric: took 132.598201ms for postStartSetup
	I0416 16:35:32.919914   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetConfigRaw
	I0416 16:35:32.920543   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:35:32.923242   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.923656   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.923685   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.923955   20924 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:35:32.924129   20924 start.go:128] duration metric: took 26.151272358s to createHost
	I0416 16:35:32.924151   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:32.926252   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.926604   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:32.926625   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:32.926763   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:32.926922   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.927056   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:32.927177   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:32.927336   20924 main.go:141] libmachine: Using SSH client type: native
	I0416 16:35:32.927524   20924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0416 16:35:32.927539   20924 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:35:33.034270   20924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285333.016350836
	
	I0416 16:35:33.034294   20924 fix.go:216] guest clock: 1713285333.016350836
	I0416 16:35:33.034303   20924 fix.go:229] Guest: 2024-04-16 16:35:33.016350836 +0000 UTC Remote: 2024-04-16 16:35:32.924141423 +0000 UTC m=+155.159321005 (delta=92.209413ms)
	I0416 16:35:33.034322   20924 fix.go:200] guest clock delta is within tolerance: 92.209413ms
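Aside: the fix.go lines compare the guest's wall clock against the host's and only resync when the delta exceeds a tolerance; here the ~92ms delta is accepted. The check itself is just a duration comparison, sketched below with an assumed tolerance value (the actual threshold is not shown in the log).

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether the guest clock is close enough to the
// host clock that no resync is needed.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(92 * time.Millisecond)                      // delta comparable to the log's 92.209413ms
	delta, ok := clockWithinTolerance(guest, host, 2*time.Second) // tolerance is an assumption, not from the log
	fmt.Println(delta, ok)
}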
	I0416 16:35:33.034330   20924 start.go:83] releasing machines lock for "ha-543552-m03", held for 26.261595405s
	I0416 16:35:33.034351   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:33.034592   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:35:33.037469   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.037861   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:33.037892   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.040362   20924 out.go:177] * Found network options:
	I0416 16:35:33.042010   20924 out.go:177]   - NO_PROXY=192.168.39.97,192.168.39.80
	W0416 16:35:33.043447   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:35:33.043468   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:35:33.043479   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:33.044104   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:33.044320   20924 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:35:33.044424   20924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:35:33.044460   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	W0416 16:35:33.044561   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	W0416 16:35:33.044586   20924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0416 16:35:33.044637   20924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:35:33.044656   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:35:33.047283   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.047311   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.047676   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:33.047707   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.047734   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:33.047773   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:33.047849   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:33.048018   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:35:33.048030   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:33.048171   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:35:33.048182   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:33.048364   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:35:33.048386   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:35:33.048490   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:35:33.285718   20924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:35:33.292682   20924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:35:33.292747   20924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:35:33.313993   20924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 16:35:33.314023   20924 start.go:494] detecting cgroup driver to use...
	I0416 16:35:33.314090   20924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:35:33.333251   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:35:33.350424   20924 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:35:33.350487   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:35:33.367096   20924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:35:33.384913   20924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:35:33.517807   20924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:35:33.690549   20924 docker.go:233] disabling docker service ...
	I0416 16:35:33.690627   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:35:33.707499   20924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:35:33.723438   20924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:35:33.873524   20924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:35:34.005516   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:35:34.020928   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:35:34.043005   20924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:35:34.043060   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.055243   20924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:35:34.055300   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.067675   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.079574   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.092467   20924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:35:34.105409   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.118622   20924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:35:34.138420   20924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
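Aside: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: set pause_image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, drop and re-add conmon_cgroup, and ensure default_sysctls opens unprivileged ports. A loose sketch of the same rewrites in Go over an in-memory copy of the file (simplified: it appends a fresh default_sysctls block when none exists rather than patching an existing array):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies roughly the substitutions the log performs with sed.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager,
	// mirroring the '/conmon_cgroup = .*/d' + '/cgroup_manager = .*/a ...' pair.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	conf := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(conf))
}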
	I0416 16:35:34.150657   20924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:35:34.163490   20924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 16:35:34.163536   20924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 16:35:34.181661   20924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:35:34.192504   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:35:34.322279   20924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 16:35:34.479653   20924 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:35:34.479739   20924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:35:34.485425   20924 start.go:562] Will wait 60s for crictl version
	I0416 16:35:34.485474   20924 ssh_runner.go:195] Run: which crictl
	I0416 16:35:34.490001   20924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:35:34.529434   20924 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:35:34.529520   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:35:34.559314   20924 ssh_runner.go:195] Run: crio --version
	I0416 16:35:34.591656   20924 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:35:34.593127   20924 out.go:177]   - env NO_PROXY=192.168.39.97
	I0416 16:35:34.594502   20924 out.go:177]   - env NO_PROXY=192.168.39.97,192.168.39.80
	I0416 16:35:34.595864   20924 main.go:141] libmachine: (ha-543552-m03) Calling .GetIP
	I0416 16:35:34.598190   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:34.598537   20924 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:35:34.598566   20924 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:35:34.598738   20924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:35:34.603546   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 16:35:34.617461   20924 mustload.go:65] Loading cluster: ha-543552
	I0416 16:35:34.617672   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:35:34.617935   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:35:34.617980   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:35:34.634091   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0416 16:35:34.634551   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:35:34.634992   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:35:34.635013   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:35:34.635348   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:35:34.635513   20924 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:35:34.637213   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:35:34.637490   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:35:34.637533   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:35:34.651768   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0416 16:35:34.652134   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:35:34.652545   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:35:34.652571   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:35:34.652878   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:35:34.653080   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:35:34.653257   20924 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552 for IP: 192.168.39.125
	I0416 16:35:34.653269   20924 certs.go:194] generating shared ca certs ...
	I0416 16:35:34.653281   20924 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:35:34.653395   20924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:35:34.653431   20924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:35:34.653437   20924 certs.go:256] generating profile certs ...
	I0416 16:35:34.653498   20924 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key
	I0416 16:35:34.653523   20924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.6b20b4ae
	I0416 16:35:34.653534   20924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.6b20b4ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.80 192.168.39.125 192.168.39.254]
	I0416 16:35:34.709574   20924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.6b20b4ae ...
	I0416 16:35:34.709603   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.6b20b4ae: {Name:mk072cdc0acef413d22b7ef1edd66a15ddb0f40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:35:34.709752   20924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.6b20b4ae ...
	I0416 16:35:34.709763   20924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.6b20b4ae: {Name:mkd18b9c565f69ea2235df7b592a2ec9e969d15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:35:34.709865   20924 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.6b20b4ae -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt
	I0416 16:35:34.709996   20924 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.6b20b4ae -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key
	I0416 16:35:34.710111   20924 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key
	I0416 16:35:34.710132   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:35:34.710143   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:35:34.710156   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:35:34.710169   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:35:34.710181   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:35:34.710194   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:35:34.710205   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:35:34.710218   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:35:34.710269   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 16:35:34.710297   20924 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 16:35:34.710304   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:35:34.710322   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:35:34.710355   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:35:34.710378   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:35:34.710411   20924 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:35:34.710439   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 16:35:34.710453   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:35:34.710465   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 16:35:34.710494   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:35:34.713066   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:35:34.713450   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:35:34.713492   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:35:34.713672   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:35:34.713932   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:35:34.714090   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:35:34.714245   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:35:34.789207   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0416 16:35:34.795016   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0416 16:35:34.810524   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0416 16:35:34.816281   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0416 16:35:34.831128   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0416 16:35:34.835672   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0416 16:35:34.848291   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0416 16:35:34.853159   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0416 16:35:34.867179   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0416 16:35:34.872551   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0416 16:35:34.887141   20924 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0416 16:35:34.892181   20924 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0416 16:35:34.903756   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:35:34.932246   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:35:34.962462   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:35:34.990572   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:35:35.021502   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0416 16:35:35.054403   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:35:35.100964   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:35:35.129372   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:35:35.157612   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 16:35:35.185342   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:35:35.215312   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 16:35:35.243164   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0416 16:35:35.261925   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0416 16:35:35.281377   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0416 16:35:35.299714   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0416 16:35:35.317583   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0416 16:35:35.336376   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0416 16:35:35.357747   20924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
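The scp lines above stage certificate material onto the new control-plane machine: files from the local .minikube tree are copied straight to the node, while the shared keys (sa.pub, sa.key, the front-proxy and etcd CAs) are first read from the primary into memory and then pushed out again, so they never touch the local disk. A minimal sketch of that in-memory push, assuming golang.org/x/crypto/ssh and a reachable node; the real ssh_runner speaks the scp sink protocol, whereas this illustration simply pipes the bytes through sudo tee. The key path and payload in main are hypothetical.

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyBytesOverSSH writes data to destPath on the remote host without
// touching the local filesystem, roughly what the "scp memory --> ..."
// lines above are doing for keys fetched from the primary node.
func copyBytesOverSSH(addr, user, keyFile, destPath string, data []byte) error {
	keyPEM, err := os.ReadFile(keyFile)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()

	// Stream the bytes to the remote path; tee runs under sudo so the
	// file can land in /var/lib/minikube/certs.
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", destPath))
}

func main() {
	// Hypothetical invocation: the key path mirrors the machines/<machine>/id_rsa
	// layout seen in the log, and sa.pub is one of the in-memory assets above.
	err := copyBytesOverSSH("192.168.39.125:22", "docker",
		"/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa",
		"/var/lib/minikube/certs/sa.pub", []byte("...public key bytes..."))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}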
	I0416 16:35:35.376246   20924 ssh_runner.go:195] Run: openssl version
	I0416 16:35:35.382518   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 16:35:35.394859   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 16:35:35.399721   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 16:35:35.399768   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 16:35:35.406048   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:35:35.418418   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:35:35.430377   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:35:35.435138   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:35:35.435174   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:35:35.441152   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:35:35.453110   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 16:35:35.465021   20924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 16:35:35.470001   20924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 16:35:35.470051   20924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 16:35:35.476854   20924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
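For each certificate dropped into /usr/share/ca-certificates, the step above computes the OpenSSL subject-name hash and links /etc/ssl/certs/<hash>.0 to it, which is how OpenSSL-based clients on the node discover the CA. A small local sketch of the same convention (a hypothetical helper; the real code shells these commands out over SSH as shown above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the OpenSSL subject hash of certPath and creates the
// <certsDir>/<hash>.0 symlink that makes the cert discoverable.
func trustCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}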
	I0416 16:35:35.489349   20924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:35:35.494043   20924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 16:35:35.494094   20924 kubeadm.go:928] updating node {m03 192.168.39.125 8443 v1.29.3 crio true true} ...
	I0416 16:35:35.494168   20924 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-543552-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:35:35.494191   20924 kube-vip.go:111] generating kube-vip config ...
	I0416 16:35:35.494218   20924 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:35:35.516064   20924 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:35:35.516138   20924 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
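The manifest above runs kube-vip as a static pod on each control-plane node: ARP announcements plus leader election keep the virtual IP 192.168.39.254 pinned to exactly one member, and lb_enable/lb_port are what the earlier "auto-enabling control-plane load-balancing" line refers to. A rough sketch of how such a manifest can be rendered from a few per-cluster values, using a cut-down template and hypothetical field names rather than minikube's actual kube-vip.go template:

package main

import (
	"os"
	"text/template"
)

// vipParams captures the handful of values that differ per cluster in the
// manifest above: the VIP, the apiserver port, and the NIC to announce on.
type vipParams struct {
	VIP       string
	Port      string
	Interface string
}

// manifestTmpl is a trimmed-down stand-in for the full static-pod manifest.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - {name: vip_interface, value: {{.Interface}}}
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: {{.VIP}}}
    - {name: cp_enable, value: "true"}
    - {name: lb_enable, value: "true"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values taken from the generated config above.
	_ = t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"})
}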
	I0416 16:35:35.516183   20924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:35:35.528565   20924 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0416 16:35:35.528629   20924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0416 16:35:35.539578   20924 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0416 16:35:35.539601   20924 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0416 16:35:35.539604   20924 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0416 16:35:35.539627   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 16:35:35.539645   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:35:35.539687   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0416 16:35:35.539603   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 16:35:35.539780   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0416 16:35:35.561322   20924 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 16:35:35.561339   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0416 16:35:35.561372   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0416 16:35:35.561410   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0416 16:35:35.561434   20924 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0416 16:35:35.561435   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0416 16:35:35.608684   20924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0416 16:35:35.608738   20924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
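Because the stat existence checks on the new node fail, the cached kubeadm/kubectl/kubelet binaries are scp'd over; the earlier "Not caching binary" lines record the upstream dl.k8s.io release URLs and the .sha256 checksum files those cached copies are validated against. A minimal sketch of such a download-and-verify step, assuming the .sha256 file carries the hex digest first on its line:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks it against the digest
// published at url+".sha256", mirroring the "checksum=file:...sha256"
// references in the log above.
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash the bytes while writing them out, then compare digests.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm"
	if err := fetchVerified(url, "/tmp/kubeadm"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}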
	I0416 16:35:36.667671   20924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0416 16:35:36.679249   20924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0416 16:35:36.698558   20924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:35:36.719019   20924 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0416 16:35:36.739131   20924 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:35:36.744488   20924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
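The one-liner above makes the control-plane.minikube.internal entry idempotent: it filters out any existing line for that host, appends the current VIP mapping, writes the result to a temp file, and copies it back over /etc/hosts with sudo. The same pattern expressed in Go, run locally for illustration (a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps name to ip.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for this name (the grep -v step above).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // the logged command uses `sudo cp` instead of a rename
}

func main() {
	_ = pinHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
}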
	I0416 16:35:36.758661   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:35:36.895492   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:35:36.917404   20924 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:35:36.917748   20924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:35:36.917798   20924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:35:36.933238   20924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0416 16:35:36.933754   20924 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:35:36.934866   20924 main.go:141] libmachine: Using API Version  1
	I0416 16:35:36.934898   20924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:35:36.935262   20924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:35:36.935493   20924 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:35:36.935673   20924 start.go:316] joinCluster: &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cluster
Name:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:35:36.935878   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0416 16:35:36.935900   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:35:36.939484   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:35:36.939956   20924 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:35:36.939983   20924 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:35:36.940145   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:35:36.940439   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:35:36.940648   20924 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:35:36.940833   20924 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:35:37.128696   20924 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:35:37.128753   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s7zeen.uafa4z2skhbmlwz6 --discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-543552-m03 --control-plane --apiserver-advertise-address=192.168.39.125 --apiserver-bind-port=8443"
	I0416 16:36:04.483809   20924 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token s7zeen.uafa4z2skhbmlwz6 --discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-543552-m03 --control-plane --apiserver-advertise-address=192.168.39.125 --apiserver-bind-port=8443": (27.355027173s)
	I0416 16:36:04.483863   20924 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0416 16:36:05.288728   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-543552-m03 minikube.k8s.io/updated_at=2024_04_16T16_36_05_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=ha-543552 minikube.k8s.io/primary=false
	I0416 16:36:05.449498   20924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-543552-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0416 16:36:05.587389   20924 start.go:318] duration metric: took 28.651723514s to joinCluster
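Joining m03 as an additional control-plane member follows the standard kubeadm flow visible above: the primary mints a join command with a fresh bootstrap token, the new node runs it with --control-plane and its own advertise address, and the node is then labeled and has its control-plane NoSchedule taint removed so it can also run workloads. A condensed sketch of that sequence, assuming kubeadm and kubectl are on PATH wherever it runs (the real flow executes each step over SSH on the appropriate node, with the extra flags shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// 1. On an existing control-plane node: mint a join command with a new token.
	joinCmd, err := run("kubeadm", "token", "create", "--print-join-command", "--ttl=0")
	if err != nil {
		panic(err)
	}

	// 2. On the new node: run that command as a control-plane join.
	args := append(strings.Fields(joinCmd)[1:], // strip the leading "kubeadm"
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.125",
		"--apiserver-bind-port=8443",
		"--ignore-preflight-errors=all")
	if out, err := run("kubeadm", args...); err != nil {
		panic(fmt.Errorf("join failed: %v\n%s", err, out))
	}

	// 3. Back on the primary: label the node and drop the NoSchedule taint
	//    so the new control-plane member also schedules workloads.
	run("kubectl", "label", "--overwrite", "nodes", "ha-543552-m03", "minikube.k8s.io/primary=false")
	run("kubectl", "taint", "nodes", "ha-543552-m03", "node-role.kubernetes.io/control-plane:NoSchedule-")
}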
	I0416 16:36:05.587463   20924 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 16:36:05.590020   20924 out.go:177] * Verifying Kubernetes components...
	I0416 16:36:05.587773   20924 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:36:05.591461   20924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:36:05.987479   20924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 16:36:06.122102   20924 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:36:06.122434   20924 kapi.go:59] client config for ha-543552: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.crt", KeyFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key", CAFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0416 16:36:06.122527   20924 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.97:8443
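The QPS and Burst fields in the client config above are left at zero, so client-go falls back to its default token-bucket rate limiter (roughly 5 requests per second with a burst of 10), which is what produces the "Waited for ... due to client-side throttling, not priority and fairness" lines further down once the readiness polling issues many GETs in quick succession. A small sketch of the same token-bucket behavior using golang.org/x/time/rate, the package that style of limiter is built on:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Roughly the client-go defaults when rest.Config leaves QPS/Burst at zero.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	start := time.Now()
	for i := 0; i < 20; i++ {
		// Wait blocks until a token is available; once the burst of 10 is
		// spent, requests are paced ~200ms apart, which surfaces in the
		// log as "Waited for ... due to client-side throttling".
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		fmt.Printf("request %2d at +%v\n", i, time.Since(start).Round(time.Millisecond))
	}
}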
	I0416 16:36:06.122803   20924 node_ready.go:35] waiting up to 6m0s for node "ha-543552-m03" to be "Ready" ...
	I0416 16:36:06.122910   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:06.122923   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:06.122934   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:06.122943   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:06.127150   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:06.623812   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:06.623845   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:06.623856   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:06.623862   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:06.628160   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:07.122994   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:07.123018   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:07.123026   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:07.123030   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:07.126714   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:07.624041   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:07.624068   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:07.624079   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:07.624086   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:07.627483   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:08.123097   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:08.123123   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:08.123134   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:08.123139   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:08.127073   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:08.127729   20924 node_ready.go:53] node "ha-543552-m03" has status "Ready":"False"
	I0416 16:36:08.623070   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:08.623091   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:08.623099   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:08.623104   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:08.626939   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.122983   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.123007   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.123015   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.123020   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.127281   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:09.127989   20924 node_ready.go:49] node "ha-543552-m03" has status "Ready":"True"
	I0416 16:36:09.128008   20924 node_ready.go:38] duration metric: took 3.005185285s for node "ha-543552-m03" to be "Ready" ...
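The round_trippers lines show the shape of the wait loop: roughly every half second the client GETs /api/v1/nodes/ha-543552-m03 and inspects the status until the Ready condition reports True. A stripped-down sketch of that check against the raw REST API, assuming an http.Client already configured with the cluster's client certificates (the real code builds its client from the kubeconfig shown above):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus models only the fields the readiness check needs.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the node object until its Ready condition is True.
func waitNodeReady(c *http.Client, apiServer, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := c.Get(fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node))
		if err == nil {
			var ns nodeStatus
			if json.NewDecoder(resp.Body).Decode(&ns) == nil {
				for _, cond := range ns.Status.Conditions {
					if cond.Type == "Ready" && cond.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond) // matches the ~half-second cadence above
	}
	return fmt.Errorf("node %s never became Ready within %v", node, timeout)
}

func main() {
	// Hypothetical client; a real one needs the TLS client certs from the kubeconfig.
	err := waitNodeReady(http.DefaultClient, "https://192.168.39.97:8443", "ha-543552-m03", 6*time.Minute)
	fmt.Println(err)
}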
	I0416 16:36:09.128016   20924 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:36:09.128073   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:09.128085   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.128096   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.128102   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.135478   20924 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0416 16:36:09.144960   20924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-k7bn7" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.145046   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-k7bn7
	I0416 16:36:09.145058   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.145068   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.145076   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.149788   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:09.150463   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:09.150477   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.150485   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.150490   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.153659   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.154420   20924 pod_ready.go:92] pod "coredns-76f75df574-k7bn7" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:09.154436   20924 pod_ready.go:81] duration metric: took 9.447894ms for pod "coredns-76f75df574-k7bn7" in "kube-system" namespace to be "Ready" ...
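The same pattern then repeats for every system-critical pod: GET the pod from kube-system, check its Ready condition, and also re-fetch the node it is scheduled on before moving to the next pod. The pod-side condition check is the only new piece; a small sketch in the same raw-REST style as the node sketch above (again hypothetical, with the client configuration elided):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// podStatus models only the fields the per-pod readiness check needs.
type podStatus struct {
	Spec struct {
		NodeName string `json:"nodeName"`
	} `json:"spec"`
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// podReady reports whether the named kube-system pod has condition Ready=True,
// and returns the node it runs on so the caller can re-check that node too.
func podReady(c *http.Client, apiServer, pod string) (bool, string, error) {
	resp, err := c.Get(fmt.Sprintf("%s/api/v1/namespaces/kube-system/pods/%s", apiServer, pod))
	if err != nil {
		return false, "", err
	}
	defer resp.Body.Close()
	var ps podStatus
	if err := json.NewDecoder(resp.Body).Decode(&ps); err != nil {
		return false, "", err
	}
	for _, cond := range ps.Status.Conditions {
		if cond.Type == "Ready" && cond.Status == "True" {
			return true, ps.Spec.NodeName, nil
		}
	}
	return false, ps.Spec.NodeName, nil
}

func main() {
	ready, node, err := podReady(http.DefaultClient, "https://192.168.39.97:8443", "etcd-ha-543552-m03")
	fmt.Println(ready, node, err)
}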
	I0416 16:36:09.154446   20924 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-l9zck" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.154506   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-l9zck
	I0416 16:36:09.154517   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.154527   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.154533   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.158503   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.159531   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:09.159545   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.159553   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.159558   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.162209   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.162925   20924 pod_ready.go:92] pod "coredns-76f75df574-l9zck" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:09.162943   20924 pod_ready.go:81] duration metric: took 8.48929ms for pod "coredns-76f75df574-l9zck" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.162953   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.163004   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552
	I0416 16:36:09.163014   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.163024   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.163029   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.165608   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.166064   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:09.166079   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.166088   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.166093   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.168339   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.168814   20924 pod_ready.go:92] pod "etcd-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:09.168829   20924 pod_ready.go:81] duration metric: took 5.869427ms for pod "etcd-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.168849   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.168931   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m02
	I0416 16:36:09.168944   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.168955   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.168964   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.171585   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.172130   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:09.172146   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.172154   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.172160   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.174820   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:09.175478   20924 pod_ready.go:92] pod "etcd-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:09.175498   20924 pod_ready.go:81] duration metric: took 6.639989ms for pod "etcd-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.175508   20924 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:09.323858   20924 request.go:629] Waited for 148.299942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:09.323950   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:09.323962   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.323973   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.323980   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.329019   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:09.523142   20924 request.go:629] Waited for 193.311389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.523208   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.523227   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.523236   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.523242   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.527249   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.723322   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:09.723342   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.723350   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.723354   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.727240   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:09.923336   20924 request.go:629] Waited for 195.327557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.923397   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:09.923402   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:09.923409   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:09.923413   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:09.927899   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:10.176106   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:10.176131   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:10.176139   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:10.176143   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:10.181585   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:10.323951   20924 request.go:629] Waited for 141.287059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:10.324000   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:10.324005   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:10.324011   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:10.324015   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:10.327902   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:10.675975   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:10.675999   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:10.676011   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:10.676017   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:10.680040   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:10.723378   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:10.723399   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:10.723407   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:10.723425   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:10.726926   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:11.175783   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:11.175808   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:11.175823   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:11.175829   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:11.179579   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:11.180495   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:11.180514   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:11.180523   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:11.180530   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:11.184004   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:11.184755   20924 pod_ready.go:102] pod "etcd-ha-543552-m03" in "kube-system" namespace has status "Ready":"False"
	I0416 16:36:11.676103   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:11.676130   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:11.676139   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:11.676144   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:11.681891   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:11.683079   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:11.683107   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:11.683114   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:11.683121   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:11.687544   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:12.176254   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:12.176279   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.176286   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.176291   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.180308   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:12.181152   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:12.181170   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.181180   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.181187   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.186396   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:12.675749   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/etcd-ha-543552-m03
	I0416 16:36:12.675772   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.675780   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.675783   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.679620   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:12.680994   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:12.681011   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.681025   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.681030   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.684055   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:12.684803   20924 pod_ready.go:92] pod "etcd-ha-543552-m03" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:12.684821   20924 pod_ready.go:81] duration metric: took 3.509304679s for pod "etcd-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:12.684858   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:12.684921   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552
	I0416 16:36:12.684933   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.684943   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.684954   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.687766   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:12.723396   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:12.723427   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.723436   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.723440   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.727021   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:12.727895   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:12.727915   20924 pod_ready.go:81] duration metric: took 43.047665ms for pod "kube-apiserver-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:12.727923   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:12.923045   20924 request.go:629] Waited for 195.069258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m02
	I0416 16:36:12.923115   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m02
	I0416 16:36:12.923121   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:12.923133   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:12.923141   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:12.926403   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:13.123571   20924 request.go:629] Waited for 196.280829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:13.123651   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:13.123656   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.123663   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.123669   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.127684   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:13.128350   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:13.128372   20924 pod_ready.go:81] duration metric: took 400.441626ms for pod "kube-apiserver-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:13.128384   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:13.323549   20924 request.go:629] Waited for 195.098361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:13.323632   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:13.323655   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.323683   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.323693   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.327672   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:13.523029   20924 request.go:629] Waited for 194.288109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:13.523079   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:13.523084   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.523090   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.523094   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.526484   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:13.723363   20924 request.go:629] Waited for 94.257303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:13.723436   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:13.723443   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.723452   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.723457   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.727320   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:13.923595   20924 request.go:629] Waited for 195.168474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:13.923671   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:13.923683   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:13.923694   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:13.923706   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:13.927592   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:14.129278   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:14.129298   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:14.129305   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:14.129308   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:14.133946   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:14.323736   20924 request.go:629] Waited for 189.02048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:14.323790   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:14.323796   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:14.323803   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:14.323810   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:14.327507   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:14.629362   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:14.629388   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:14.629399   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:14.629406   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:14.632777   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:14.723546   20924 request.go:629] Waited for 89.51166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:14.723596   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:14.723602   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:14.723610   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:14.723619   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:14.727872   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:15.129300   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:15.129327   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:15.129334   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:15.129338   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:15.132883   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:15.133980   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:15.133992   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:15.134002   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:15.134005   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:15.137163   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:15.137850   20924 pod_ready.go:102] pod "kube-apiserver-ha-543552-m03" in "kube-system" namespace has status "Ready":"False"
	I0416 16:36:15.628670   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:15.628693   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:15.628704   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:15.628711   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:15.632329   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:15.633285   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:15.633307   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:15.633317   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:15.633323   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:15.636004   20924 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0416 16:36:16.129516   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-543552-m03
	I0416 16:36:16.129536   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.129543   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.129548   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.133656   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:16.134363   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:16.134379   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.134387   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.134390   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.137704   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:16.138279   20924 pod_ready.go:92] pod "kube-apiserver-ha-543552-m03" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:16.138299   20924 pod_ready.go:81] duration metric: took 3.00990684s for pod "kube-apiserver-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.138308   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.138361   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552
	I0416 16:36:16.138375   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.138385   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.138397   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.141484   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:16.323665   20924 request.go:629] Waited for 181.35503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:16.323757   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:16.323766   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.323775   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.323782   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.327198   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:16.327855   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:16.327884   20924 pod_ready.go:81] duration metric: took 189.565043ms for pod "kube-controller-manager-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.327897   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.523348   20924 request.go:629] Waited for 195.380108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m02
	I0416 16:36:16.523419   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m02
	I0416 16:36:16.523424   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.523431   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.523435   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.527155   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:16.723535   20924 request.go:629] Waited for 195.402961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:16.723598   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:16.723603   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.723622   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.723639   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.728059   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:16.728678   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:16.728701   20924 pod_ready.go:81] duration metric: took 400.794948ms for pod "kube-controller-manager-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.728713   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:16.924003   20924 request.go:629] Waited for 195.211261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m03
	I0416 16:36:16.924064   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-543552-m03
	I0416 16:36:16.924071   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:16.924081   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:16.924095   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:16.927848   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:17.123301   20924 request.go:629] Waited for 194.363347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:17.123354   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:17.123359   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.123366   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.123370   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.128474   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:17.129179   20924 pod_ready.go:92] pod "kube-controller-manager-ha-543552-m03" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:17.129206   20924 pod_ready.go:81] duration metric: took 400.480248ms for pod "kube-controller-manager-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.129216   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vkts" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.323361   20924 request.go:629] Waited for 194.081395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vkts
	I0416 16:36:17.323501   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vkts
	I0416 16:36:17.323514   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.323523   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.323529   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.329145   20924 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0416 16:36:17.523635   20924 request.go:629] Waited for 193.373878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:17.523684   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:17.523689   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.523695   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.523700   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.528716   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:17.530195   20924 pod_ready.go:92] pod "kube-proxy-2vkts" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:17.530213   20924 pod_ready.go:81] duration metric: took 400.991105ms for pod "kube-proxy-2vkts" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.530221   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ncrw" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.723453   20924 request.go:629] Waited for 193.159148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ncrw
	I0416 16:36:17.723517   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ncrw
	I0416 16:36:17.723522   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.723529   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.723534   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.727525   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:17.923833   20924 request.go:629] Waited for 195.411309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:17.923912   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:17.923918   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:17.923928   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:17.923933   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:17.927566   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:17.928504   20924 pod_ready.go:92] pod "kube-proxy-9ncrw" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:17.928521   20924 pod_ready.go:81] duration metric: took 398.294345ms for pod "kube-proxy-9ncrw" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:17.928532   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9lhc" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.123866   20924 request.go:629] Waited for 195.243048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9lhc
	I0416 16:36:18.124004   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9lhc
	I0416 16:36:18.124029   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.124041   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.124049   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.134748   20924 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0416 16:36:18.323043   20924 request.go:629] Waited for 187.276686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:18.323097   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:18.323104   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.323114   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.323120   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.329580   20924 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:36:18.330975   20924 pod_ready.go:92] pod "kube-proxy-c9lhc" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:18.330993   20924 pod_ready.go:81] duration metric: took 402.454383ms for pod "kube-proxy-c9lhc" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.331002   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.523032   20924 request.go:629] Waited for 191.95579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552
	I0416 16:36:18.523084   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552
	I0416 16:36:18.523089   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.523101   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.523105   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.526867   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:18.723964   20924 request.go:629] Waited for 196.356109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:18.724034   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552
	I0416 16:36:18.724039   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.724046   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.724051   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.727800   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:18.728600   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:18.728628   20924 pod_ready.go:81] duration metric: took 397.620125ms for pod "kube-scheduler-ha-543552" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.728638   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:18.923696   20924 request.go:629] Waited for 194.996162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m02
	I0416 16:36:18.923756   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m02
	I0416 16:36:18.923761   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:18.923768   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:18.923772   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:18.927792   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:19.123889   20924 request.go:629] Waited for 195.353903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:19.123940   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m02
	I0416 16:36:19.123946   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.123952   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.123956   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.129981   20924 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0416 16:36:19.130969   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:19.130986   20924 pod_ready.go:81] duration metric: took 402.341731ms for pod "kube-scheduler-ha-543552-m02" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:19.130999   20924 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:19.323070   20924 request.go:629] Waited for 191.983625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m03
	I0416 16:36:19.323159   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-543552-m03
	I0416 16:36:19.323168   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.323175   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.323179   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.327476   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:19.523769   20924 request.go:629] Waited for 195.362415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:19.523872   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes/ha-543552-m03
	I0416 16:36:19.523884   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.523895   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.523902   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.527429   20924 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0416 16:36:19.528335   20924 pod_ready.go:92] pod "kube-scheduler-ha-543552-m03" in "kube-system" namespace has status "Ready":"True"
	I0416 16:36:19.528353   20924 pod_ready.go:81] duration metric: took 397.346043ms for pod "kube-scheduler-ha-543552-m03" in "kube-system" namespace to be "Ready" ...
	I0416 16:36:19.528363   20924 pod_ready.go:38] duration metric: took 10.400339257s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 16:36:19.528376   20924 api_server.go:52] waiting for apiserver process to appear ...
	I0416 16:36:19.528419   20924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:36:19.547602   20924 api_server.go:72] duration metric: took 13.960104549s to wait for apiserver process to appear ...
	I0416 16:36:19.547624   20924 api_server.go:88] waiting for apiserver healthz status ...
	I0416 16:36:19.547651   20924 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0416 16:36:19.554523   20924 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0416 16:36:19.554582   20924 round_trippers.go:463] GET https://192.168.39.97:8443/version
	I0416 16:36:19.554592   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.554602   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.554611   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.555911   20924 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0416 16:36:19.555971   20924 api_server.go:141] control plane version: v1.29.3
	I0416 16:36:19.555991   20924 api_server.go:131] duration metric: took 8.353386ms to wait for apiserver health ...
	I0416 16:36:19.555997   20924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 16:36:19.723982   20924 request.go:629] Waited for 167.93243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:19.724063   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:19.724079   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.724088   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.724098   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.733319   20924 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0416 16:36:19.739866   20924 system_pods.go:59] 24 kube-system pods found
	I0416 16:36:19.739897   20924 system_pods.go:61] "coredns-76f75df574-k7bn7" [8f45a7f4-5779-49ad-949c-29fe8ad7d485] Running
	I0416 16:36:19.739903   20924 system_pods.go:61] "coredns-76f75df574-l9zck" [4f0d01cc-4c32-4953-88ec-f07e72666894] Running
	I0416 16:36:19.739906   20924 system_pods.go:61] "etcd-ha-543552" [e0b55a81-bfa4-4ba4-adde-69d72d728240] Running
	I0416 16:36:19.739909   20924 system_pods.go:61] "etcd-ha-543552-m02" [79a7bdf2-6297-434f-afde-dcee38a7f4b6] Running
	I0416 16:36:19.739912   20924 system_pods.go:61] "etcd-ha-543552-m03" [6634160f-7d48-4458-8628-2b3f340d8810] Running
	I0416 16:36:19.739915   20924 system_pods.go:61] "kindnet-6wbkm" [1aa2a9c0-7c95-49ca-817d-1dfaaff56145] Running
	I0416 16:36:19.739918   20924 system_pods.go:61] "kindnet-7hwtp" [f54400cd-4ab3-4e00-b741-e1419d1b3b66] Running
	I0416 16:36:19.739922   20924 system_pods.go:61] "kindnet-q4275" [2f65c59e-1e69-402a-af3a-2c28f7783c9f] Running
	I0416 16:36:19.739926   20924 system_pods.go:61] "kube-apiserver-ha-543552" [4010eca2-0d2e-46c1-9c8f-59961c27c3bf] Running
	I0416 16:36:19.739931   20924 system_pods.go:61] "kube-apiserver-ha-543552-m02" [f2e26e25-fb61-4754-a98b-1c0235c2907f] Running
	I0416 16:36:19.739939   20924 system_pods.go:61] "kube-apiserver-ha-543552-m03" [e20ae43c-f3ac-45fc-a7ac-2b193c0e4a59] Running
	I0416 16:36:19.739945   20924 system_pods.go:61] "kube-controller-manager-ha-543552" [9aa3103c-1ada-4947-84cb-c6d6c80274f0] Running
	I0416 16:36:19.739957   20924 system_pods.go:61] "kube-controller-manager-ha-543552-m02" [d0cfc02d-baa6-4c39-960a-c94989f7f545] Running
	I0416 16:36:19.739962   20924 system_pods.go:61] "kube-controller-manager-ha-543552-m03" [779ae963-1dfb-4d6e-bf23-c49a60880bdd] Running
	I0416 16:36:19.739968   20924 system_pods.go:61] "kube-proxy-2vkts" [4d33f122-fdc5-47ef-abd8-1e3074401db9] Running
	I0416 16:36:19.739977   20924 system_pods.go:61] "kube-proxy-9ncrw" [7c22a15b-35f1-4a08-b5ad-889f7d14706c] Running
	I0416 16:36:19.739982   20924 system_pods.go:61] "kube-proxy-c9lhc" [b8027952-1449-42c9-9bea-14aa1eb113aa] Running
	I0416 16:36:19.739987   20924 system_pods.go:61] "kube-scheduler-ha-543552" [644f8507-38cf-41d2-8c3a-cf1d2817bcff] Running
	I0416 16:36:19.739992   20924 system_pods.go:61] "kube-scheduler-ha-543552-m02" [06bfa48f-a357-4c0b-a36d-fd9802387211] Running
	I0416 16:36:19.739997   20924 system_pods.go:61] "kube-scheduler-ha-543552-m03" [4b562a1e-9bba-4208-b04d-a0dbee0c9e7e] Running
	I0416 16:36:19.740002   20924 system_pods.go:61] "kube-vip-ha-543552" [73f7261f-431b-4d66-9567-cd65dafbf212] Running
	I0416 16:36:19.740006   20924 system_pods.go:61] "kube-vip-ha-543552-m02" [315f50da-9df3-47a5-a88f-72857a417304] Running
	I0416 16:36:19.740011   20924 system_pods.go:61] "kube-vip-ha-543552-m03" [cca4c658-0439-4cef-b7f9-b8cc2b66a222] Running
	I0416 16:36:19.740016   20924 system_pods.go:61] "storage-provisioner" [663f4c76-01f8-4664-9345-740540fdc41c] Running
	I0416 16:36:19.740024   20924 system_pods.go:74] duration metric: took 184.020561ms to wait for pod list to return data ...
	I0416 16:36:19.740035   20924 default_sa.go:34] waiting for default service account to be created ...
	I0416 16:36:19.923429   20924 request.go:629] Waited for 183.326312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0416 16:36:19.923500   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/default/serviceaccounts
	I0416 16:36:19.923505   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:19.923513   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:19.923516   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:19.927571   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:19.927759   20924 default_sa.go:45] found service account: "default"
	I0416 16:36:19.927781   20924 default_sa.go:55] duration metric: took 187.737838ms for default service account to be created ...
	I0416 16:36:19.927790   20924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 16:36:20.123346   20924 request.go:629] Waited for 195.490445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:20.123407   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/namespaces/kube-system/pods
	I0416 16:36:20.123412   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:20.123419   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:20.123424   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:20.132410   20924 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0416 16:36:20.139891   20924 system_pods.go:86] 24 kube-system pods found
	I0416 16:36:20.139917   20924 system_pods.go:89] "coredns-76f75df574-k7bn7" [8f45a7f4-5779-49ad-949c-29fe8ad7d485] Running
	I0416 16:36:20.139923   20924 system_pods.go:89] "coredns-76f75df574-l9zck" [4f0d01cc-4c32-4953-88ec-f07e72666894] Running
	I0416 16:36:20.139927   20924 system_pods.go:89] "etcd-ha-543552" [e0b55a81-bfa4-4ba4-adde-69d72d728240] Running
	I0416 16:36:20.139931   20924 system_pods.go:89] "etcd-ha-543552-m02" [79a7bdf2-6297-434f-afde-dcee38a7f4b6] Running
	I0416 16:36:20.139936   20924 system_pods.go:89] "etcd-ha-543552-m03" [6634160f-7d48-4458-8628-2b3f340d8810] Running
	I0416 16:36:20.139940   20924 system_pods.go:89] "kindnet-6wbkm" [1aa2a9c0-7c95-49ca-817d-1dfaaff56145] Running
	I0416 16:36:20.139945   20924 system_pods.go:89] "kindnet-7hwtp" [f54400cd-4ab3-4e00-b741-e1419d1b3b66] Running
	I0416 16:36:20.139948   20924 system_pods.go:89] "kindnet-q4275" [2f65c59e-1e69-402a-af3a-2c28f7783c9f] Running
	I0416 16:36:20.139952   20924 system_pods.go:89] "kube-apiserver-ha-543552" [4010eca2-0d2e-46c1-9c8f-59961c27c3bf] Running
	I0416 16:36:20.139956   20924 system_pods.go:89] "kube-apiserver-ha-543552-m02" [f2e26e25-fb61-4754-a98b-1c0235c2907f] Running
	I0416 16:36:20.139960   20924 system_pods.go:89] "kube-apiserver-ha-543552-m03" [e20ae43c-f3ac-45fc-a7ac-2b193c0e4a59] Running
	I0416 16:36:20.139965   20924 system_pods.go:89] "kube-controller-manager-ha-543552" [9aa3103c-1ada-4947-84cb-c6d6c80274f0] Running
	I0416 16:36:20.139972   20924 system_pods.go:89] "kube-controller-manager-ha-543552-m02" [d0cfc02d-baa6-4c39-960a-c94989f7f545] Running
	I0416 16:36:20.139976   20924 system_pods.go:89] "kube-controller-manager-ha-543552-m03" [779ae963-1dfb-4d6e-bf23-c49a60880bdd] Running
	I0416 16:36:20.139982   20924 system_pods.go:89] "kube-proxy-2vkts" [4d33f122-fdc5-47ef-abd8-1e3074401db9] Running
	I0416 16:36:20.139986   20924 system_pods.go:89] "kube-proxy-9ncrw" [7c22a15b-35f1-4a08-b5ad-889f7d14706c] Running
	I0416 16:36:20.139992   20924 system_pods.go:89] "kube-proxy-c9lhc" [b8027952-1449-42c9-9bea-14aa1eb113aa] Running
	I0416 16:36:20.139996   20924 system_pods.go:89] "kube-scheduler-ha-543552" [644f8507-38cf-41d2-8c3a-cf1d2817bcff] Running
	I0416 16:36:20.140002   20924 system_pods.go:89] "kube-scheduler-ha-543552-m02" [06bfa48f-a357-4c0b-a36d-fd9802387211] Running
	I0416 16:36:20.140006   20924 system_pods.go:89] "kube-scheduler-ha-543552-m03" [4b562a1e-9bba-4208-b04d-a0dbee0c9e7e] Running
	I0416 16:36:20.140013   20924 system_pods.go:89] "kube-vip-ha-543552" [73f7261f-431b-4d66-9567-cd65dafbf212] Running
	I0416 16:36:20.140016   20924 system_pods.go:89] "kube-vip-ha-543552-m02" [315f50da-9df3-47a5-a88f-72857a417304] Running
	I0416 16:36:20.140022   20924 system_pods.go:89] "kube-vip-ha-543552-m03" [cca4c658-0439-4cef-b7f9-b8cc2b66a222] Running
	I0416 16:36:20.140025   20924 system_pods.go:89] "storage-provisioner" [663f4c76-01f8-4664-9345-740540fdc41c] Running
	I0416 16:36:20.140035   20924 system_pods.go:126] duration metric: took 212.238596ms to wait for k8s-apps to be running ...
	I0416 16:36:20.140044   20924 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 16:36:20.140087   20924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:36:20.158413   20924 system_svc.go:56] duration metric: took 18.358997ms WaitForService to wait for kubelet
	I0416 16:36:20.158453   20924 kubeadm.go:576] duration metric: took 14.570948499s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:36:20.158476   20924 node_conditions.go:102] verifying NodePressure condition ...
	I0416 16:36:20.322977   20924 request.go:629] Waited for 164.434484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.97:8443/api/v1/nodes
	I0416 16:36:20.323048   20924 round_trippers.go:463] GET https://192.168.39.97:8443/api/v1/nodes
	I0416 16:36:20.323053   20924 round_trippers.go:469] Request Headers:
	I0416 16:36:20.323061   20924 round_trippers.go:473]     Accept: application/json, */*
	I0416 16:36:20.323068   20924 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0416 16:36:20.327426   20924 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0416 16:36:20.328773   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:36:20.328798   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:36:20.328811   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:36:20.328816   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:36:20.328819   20924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 16:36:20.328823   20924 node_conditions.go:123] node cpu capacity is 2
	I0416 16:36:20.328826   20924 node_conditions.go:105] duration metric: took 170.345289ms to run NodePressure ...
	I0416 16:36:20.328853   20924 start.go:240] waiting for startup goroutines ...
	I0416 16:36:20.328880   20924 start.go:254] writing updated cluster config ...
	I0416 16:36:20.329168   20924 ssh_runner.go:195] Run: rm -f paused
	I0416 16:36:20.385164   20924 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 16:36:20.387211   20924 out.go:177] * Done! kubectl is now configured to use "ha-543552" cluster and "default" namespace by default
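The wait loop captured above polls each control-plane pod's Ready condition through the API server, and the "Waited for ... due to client-side throttling" entries come from client-go's default client-side rate limiter delaying requests. As an illustration only (not part of the captured log, and not minikube's actual harness code), a minimal client-go sketch of the same readiness-polling pattern could look like the following; the kubeconfig path, namespace, and pod name are taken from the log above, everything else is an assumption.

// readiness_poll.go: illustrative sketch of the pod-Ready polling pattern
// seen in the log above (assumed example, not minikube's actual code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load ~/.kube/config; the test harness builds its REST config differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the pod reports Ready or 6 minutes elapse, mirroring the
	// "waiting up to 6m0s for pod ... to be Ready" entries in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, getErr := client.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-543552", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Println("pod ready:", err == nil)
}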
	
	
	==> CRI-O <==
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.518526706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50166b44-2a9f-4a41-94c2-985d86fb7631 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.518900407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285382937724249,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238764737007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c5cf1df494c2d059ee58deebee8c2fba0939877bf3482df66d2bae402ca39f,PodSandboxId:a709b139696349b04b29d63dc2d87b74725a76db992e41b053e2926bae539aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713285238725833599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238689850011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5
779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555,PodSandboxId:2b6c3518676ac2f2f09ec1eb2e69aee774a63dd5df2ad01707839c9aaf7c79dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285
236621935463,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285236321624687,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c,PodSandboxId:d742d545e022a16a6d58e4e0a84f9df2ad19bce1a6257f78b8e4ee0c64c35593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285216603865573,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 827abfbff9325d32b15386c2e6a23718,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285214233623633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e,PodSandboxId:564e47e5a81fc6c1648c94a0a3ef7412ebd65f1802fe59d6cca488dadb41377b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285214266364806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285214183872384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88,PodSandboxId:bbd97783ca669efb2cf652170e0abe2712537ce963e1f0c32b14010beadad122,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285214153135328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50166b44-2a9f-4a41-94c2-985d86fb7631 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.567643457Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a643491-a0a6-4d5b-bbd5-971695a017fa name=/runtime.v1.RuntimeService/Version
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.567745877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a643491-a0a6-4d5b-bbd5-971695a017fa name=/runtime.v1.RuntimeService/Version
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.569757216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ce68578-f30c-448f-877d-fda4699beb96 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.570282827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713285641570259216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ce68578-f30c-448f-877d-fda4699beb96 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.570837468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68d9dd27-12d8-4f03-9de6-8ff8eebb3440 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.570919742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68d9dd27-12d8-4f03-9de6-8ff8eebb3440 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.571248634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285382937724249,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238764737007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c5cf1df494c2d059ee58deebee8c2fba0939877bf3482df66d2bae402ca39f,PodSandboxId:a709b139696349b04b29d63dc2d87b74725a76db992e41b053e2926bae539aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713285238725833599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238689850011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5
779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555,PodSandboxId:2b6c3518676ac2f2f09ec1eb2e69aee774a63dd5df2ad01707839c9aaf7c79dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285
236621935463,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285236321624687,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c,PodSandboxId:d742d545e022a16a6d58e4e0a84f9df2ad19bce1a6257f78b8e4ee0c64c35593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285216603865573,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 827abfbff9325d32b15386c2e6a23718,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285214233623633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e,PodSandboxId:564e47e5a81fc6c1648c94a0a3ef7412ebd65f1802fe59d6cca488dadb41377b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285214266364806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285214183872384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88,PodSandboxId:bbd97783ca669efb2cf652170e0abe2712537ce963e1f0c32b14010beadad122,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285214153135328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68d9dd27-12d8-4f03-9de6-8ff8eebb3440 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.583644137Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=c2390ac5-f89b-44c9-80d7-7a04fa6e70e3 name=/runtime.v1.RuntimeService/Status
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.583704969Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c2390ac5-f89b-44c9-80d7-7a04fa6e70e3 name=/runtime.v1.RuntimeService/Status
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.618823569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be4b65f8-92b8-4628-aa81-2f09000cf514 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.618925945Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be4b65f8-92b8-4628-aa81-2f09000cf514 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.620517748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c263c4f8-6e2f-414f-bd01-2d4c1bc9854b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.621041460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713285641620938390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c263c4f8-6e2f-414f-bd01-2d4c1bc9854b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.621548778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5986fd95-c457-4b26-bee2-905f6c9e7844 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.621654615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5986fd95-c457-4b26-bee2-905f6c9e7844 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.622033486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285382937724249,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238764737007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c5cf1df494c2d059ee58deebee8c2fba0939877bf3482df66d2bae402ca39f,PodSandboxId:a709b139696349b04b29d63dc2d87b74725a76db992e41b053e2926bae539aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713285238725833599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238689850011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5
779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555,PodSandboxId:2b6c3518676ac2f2f09ec1eb2e69aee774a63dd5df2ad01707839c9aaf7c79dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285
236621935463,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285236321624687,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c,PodSandboxId:d742d545e022a16a6d58e4e0a84f9df2ad19bce1a6257f78b8e4ee0c64c35593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285216603865573,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 827abfbff9325d32b15386c2e6a23718,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285214233623633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e,PodSandboxId:564e47e5a81fc6c1648c94a0a3ef7412ebd65f1802fe59d6cca488dadb41377b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285214266364806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285214183872384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88,PodSandboxId:bbd97783ca669efb2cf652170e0abe2712537ce963e1f0c32b14010beadad122,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285214153135328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5986fd95-c457-4b26-bee2-905f6c9e7844 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.670829706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b749edb0-b7d2-430d-b0e9-3e49c0de3da8 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.670911027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b749edb0-b7d2-430d-b0e9-3e49c0de3da8 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.672408230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13caf480-ffb6-4859-8cab-e722c25329cd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.672837499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713285641672815101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13caf480-ffb6-4859-8cab-e722c25329cd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.673407073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa8c045b-446b-46b4-bf98-af4cbb037bb1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.673463547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa8c045b-446b-46b4-bf98-af4cbb037bb1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:40:41 ha-543552 crio[680]: time="2024-04-16 16:40:41.673699455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285382937724249,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238764737007,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0c5cf1df494c2d059ee58deebee8c2fba0939877bf3482df66d2bae402ca39f,PodSandboxId:a709b139696349b04b29d63dc2d87b74725a76db992e41b053e2926bae539aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713285238725833599,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285238689850011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5
779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555,PodSandboxId:2b6c3518676ac2f2f09ec1eb2e69aee774a63dd5df2ad01707839c9aaf7c79dc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285
236621935463,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285236321624687,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c,PodSandboxId:d742d545e022a16a6d58e4e0a84f9df2ad19bce1a6257f78b8e4ee0c64c35593,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285216603865573,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 827abfbff9325d32b15386c2e6a23718,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285214233623633,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e,PodSandboxId:564e47e5a81fc6c1648c94a0a3ef7412ebd65f1802fe59d6cca488dadb41377b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285214266364806,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285214183872384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sch
eduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88,PodSandboxId:bbd97783ca669efb2cf652170e0abe2712537ce963e1f0c32b14010beadad122,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285214153135328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa8c045b-446b-46b4-bf98-af4cbb037bb1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4eff3ed28c1a6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   0a4cbed3518bb       busybox-7fdf7869d9-zmcc2
	a326689cf68a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   7d0e2bbea0507       coredns-76f75df574-l9zck
	e0c5cf1df494c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a709b13969634       storage-provisioner
	e82d4c4b6df66       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   3c0b61b8ba2ff       coredns-76f75df574-k7bn7
	c2c331bf17fe8       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   2b6c3518676ac       kindnet-7hwtp
	697fe1db84b5d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      6 minutes ago       Running             kube-proxy                0                   016912d243f9d       kube-proxy-c9lhc
	b4d4b03694327       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   d742d545e022a       kube-vip-ha-543552
	495afba1f7549       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago       Running             kube-controller-manager   0                   564e47e5a81fc       kube-controller-manager-ha-543552
	ce9f179d540bc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   f5aa5ed306340       etcd-ha-543552
	5f7d02aab74a8       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago       Running             kube-scheduler            0                   158c5349515db       kube-scheduler-ha-543552
	80fb22fd3cc49       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago       Running             kube-apiserver            0                   bbd97783ca669       kube-apiserver-ha-543552
	
	
	==> coredns [a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324] <==
	[INFO] 10.244.0.4:59922 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000511841s
	[INFO] 10.244.2.2:48182 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004454115s
	[INFO] 10.244.2.2:44194 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000236966s
	[INFO] 10.244.2.2:39038 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163174s
	[INFO] 10.244.2.2:42477 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002852142s
	[INFO] 10.244.2.2:47206 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189393s
	[INFO] 10.244.1.2:55215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000293483s
	[INFO] 10.244.1.2:55166 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111209s
	[INFO] 10.244.1.2:36437 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001400626s
	[INFO] 10.244.1.2:38888 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185603s
	[INFO] 10.244.0.4:46391 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104951s
	[INFO] 10.244.0.4:59290 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001608985s
	[INFO] 10.244.0.4:39400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075172s
	[INFO] 10.244.2.2:50417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152413s
	[INFO] 10.244.2.2:51697 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216701s
	[INFO] 10.244.2.2:46301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158413s
	[INFO] 10.244.1.2:58450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001388s
	[INFO] 10.244.1.2:43346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108795s
	[INFO] 10.244.0.4:44420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074923s
	[INFO] 10.244.0.4:51452 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107645s
	[INFO] 10.244.2.2:44963 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121222s
	[INFO] 10.244.2.2:46302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00020113s
	[INFO] 10.244.2.2:51995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000170275s
	[INFO] 10.244.0.4:40157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126298s
	[INFO] 10.244.0.4:54438 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176652s
	
	
	==> coredns [e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108] <==
	[INFO] 10.244.0.4:49242 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001984411s
	[INFO] 10.244.2.2:34467 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272048s
	[INFO] 10.244.2.2:45332 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000229408s
	[INFO] 10.244.2.2:36963 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170135s
	[INFO] 10.244.1.2:42830 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002119141s
	[INFO] 10.244.1.2:44539 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228353s
	[INFO] 10.244.1.2:42961 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000595811s
	[INFO] 10.244.1.2:46668 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010375s
	[INFO] 10.244.0.4:42508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000269602s
	[INFO] 10.244.0.4:33007 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001845252s
	[INFO] 10.244.0.4:45175 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124293s
	[INFO] 10.244.0.4:37034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123057s
	[INFO] 10.244.0.4:56706 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077781s
	[INFO] 10.244.2.2:48795 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014109s
	[INFO] 10.244.1.2:60733 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013497s
	[INFO] 10.244.1.2:47606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137564s
	[INFO] 10.244.0.4:43266 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102784s
	[INFO] 10.244.0.4:35773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161303s
	[INFO] 10.244.2.2:35260 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000298984s
	[INFO] 10.244.1.2:48933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119878s
	[INFO] 10.244.1.2:44462 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168252s
	[INFO] 10.244.1.2:50323 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147657s
	[INFO] 10.244.1.2:51016 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131163s
	[INFO] 10.244.0.4:50260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114104s
	[INFO] 10.244.0.4:37053 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068482s
	
	
	==> describe nodes <==
	Name:               ha-543552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_33_41_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:33:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:40:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:36:45 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:36:45 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:36:45 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:36:45 +0000   Tue, 16 Apr 2024 16:33:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-543552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6dd8560d23a945a5aa6d3b02a2c3dc1b
	  System UUID:                6dd8560d-23a9-45a5-aa6d-3b02a2c3dc1b
	  Boot ID:                    7c97db37-f0b9-4406-9537-1480d467974d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-zmcc2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 coredns-76f75df574-k7bn7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m48s
	  kube-system                 coredns-76f75df574-l9zck             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m48s
	  kube-system                 etcd-ha-543552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m1s
	  kube-system                 kindnet-7hwtp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m48s
	  kube-system                 kube-apiserver-ha-543552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-controller-manager-ha-543552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-proxy-c9lhc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-scheduler-ha-543552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-vip-ha-543552                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m45s  kube-proxy       
	  Normal  Starting                 7m1s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m1s   kubelet          Node ha-543552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m1s   kubelet          Node ha-543552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m1s   kubelet          Node ha-543552 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m48s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal  NodeReady                6m44s  kubelet          Node ha-543552 status is now: NodeReady
	  Normal  RegisteredNode           5m32s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal  RegisteredNode           4m24s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	
	
	Name:               ha-543552-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_34_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:34:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:37:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 16 Apr 2024 16:36:52 +0000   Tue, 16 Apr 2024 16:38:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 16 Apr 2024 16:36:52 +0000   Tue, 16 Apr 2024 16:38:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 16 Apr 2024 16:36:52 +0000   Tue, 16 Apr 2024 16:38:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 16 Apr 2024 16:36:52 +0000   Tue, 16 Apr 2024 16:38:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-543552-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2f4c6e70b7c46048863edfff3e863df
	  System UUID:                e2f4c6e7-0b7c-4604-8863-edfff3e863df
	  Boot ID:                    c70dbd0c-349c-4713-a6b1-4fa48198aed0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7wbjg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-ha-543552-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m51s
	  kube-system                 kindnet-q4275                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m52s
	  kube-system                 kube-apiserver-ha-543552-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-controller-manager-ha-543552-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-proxy-2vkts                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-scheduler-ha-543552-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-vip-ha-543552-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m52s (x8 over 5m52s)  kubelet          Node ha-543552-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s (x8 over 5m52s)  kubelet          Node ha-543552-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s (x7 over 5m52s)  kubelet          Node ha-543552-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m48s                  node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  NodeNotReady             2m38s                  node-controller  Node ha-543552-m02 status is now: NodeNotReady
	
	
	Name:               ha-543552-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_36_05_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:40:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:36:29 +0000   Tue, 16 Apr 2024 16:35:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:36:29 +0000   Tue, 16 Apr 2024 16:35:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:36:29 +0000   Tue, 16 Apr 2024 16:35:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:36:29 +0000   Tue, 16 Apr 2024 16:36:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    ha-543552-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 affc17c9d3664ffba11e272d96fa3d10
	  System UUID:                affc17c9-d366-4ffb-a11e-272d96fa3d10
	  Boot ID:                    42171959-bc11-46c0-9578-af565ce67aa6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-2prpr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-ha-543552-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m42s
	  kube-system                 kindnet-6wbkm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m43s
	  kube-system                 kube-apiserver-ha-543552-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-ha-543552-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-9ncrw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-ha-543552-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-vip-ha-543552-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m43s)  kubelet          Node ha-543552-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m43s)  kubelet          Node ha-543552-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m43s)  kubelet          Node ha-543552-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal  RegisteredNode           4m38s                  node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	
	
	Name:               ha-543552-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_36_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:36:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:40:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:37:28 +0000   Tue, 16 Apr 2024 16:36:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:37:28 +0000   Tue, 16 Apr 2024 16:36:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:37:28 +0000   Tue, 16 Apr 2024 16:36:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:37:28 +0000   Tue, 16 Apr 2024 16:37:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-543552-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f46fde69f5e74ab18cd1001a10200bfb
	  System UUID:                f46fde69-f5e7-4ab1-8cd1-001a10200bfb
	  Boot ID:                    99a101a4-1c3b-4821-84ee-6c1ffce7c674
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hghz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m38s
	  kube-system                 kube-proxy-g5pqm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m39s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m45s (x2 over 3m45s)  kubelet          Node ha-543552-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x2 over 3m45s)  kubelet          Node ha-543552-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x2 over 3m45s)  kubelet          Node ha-543552-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node ha-543552-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr16 16:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051391] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043432] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.624403] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.493869] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.688655] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.068457] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.060006] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073697] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.185591] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.154095] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.315435] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.805735] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.066066] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.494086] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.897359] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.972784] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.095897] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.136469] kauditd_printk_skb: 21 callbacks suppressed
	[Apr16 16:34] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1] <==
	{"level":"warn","ts":"2024-04-16T16:40:41.99773Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.020341Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.036773Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.044752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.048538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.05238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.060447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.068076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.069302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.079203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.089379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.098134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.114936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.124936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.135146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.139004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.143891Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.151495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.158645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.16667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.170674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.179115Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.180512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.244562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T16:40:42.247042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f61fae125a956d36","from":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 16:40:42 up 7 min,  0 users,  load average: 0.85, 0.44, 0.20
	Linux ha-543552 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c2c331bf17fe89e8d4f215c5d991cddb9b1d88844ad9fc0e17d1d2968d494555] <==
	I0416 16:40:08.290468       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:40:18.305919       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:40:18.306020       1 main.go:227] handling current node
	I0416 16:40:18.306038       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:40:18.306048       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:40:18.306178       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:40:18.306212       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:40:18.306270       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:40:18.306281       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:40:28.315496       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:40:28.315599       1 main.go:227] handling current node
	I0416 16:40:28.315623       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:40:28.315641       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:40:28.315775       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:40:28.315808       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:40:28.315860       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:40:28.315878       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:40:38.331834       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:40:38.332222       1 main.go:227] handling current node
	I0416 16:40:38.332361       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:40:38.332443       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:40:38.332666       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:40:38.332729       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:40:38.332868       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:40:38.332899       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88] <==
	I0416 16:33:37.777459       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:33:37.781749       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 16:33:37.781771       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:33:37.784462       1 controller.go:624] quota admission added evaluator for: namespaces
	I0416 16:33:37.786691       1 cache.go:39] Caches are synced for autoregister controller
	E0416 16:33:37.787068       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0416 16:33:38.026573       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:33:38.588283       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0416 16:33:38.592876       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 16:33:38.592933       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 16:33:39.226354       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 16:33:39.274469       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 16:33:39.415784       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 16:33:39.424885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.97]
	I0416 16:33:39.425765       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:33:39.430379       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 16:33:39.632050       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 16:33:41.220584       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 16:33:41.242589       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 16:33:41.252415       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 16:33:54.144612       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0416 16:33:54.181370       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0416 16:36:59.248633       1 trace.go:236] Trace[1774261545]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3f2c4d34-3af7-4df0-a83b-fdc32a1eed32,client:192.168.39.126,api-group:,api-version:v1,name:kube-proxy-tskwl,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-tskwl,user-agent:kubelet/v1.29.3 (linux/amd64) kubernetes/6813625,verb:DELETE (16-Apr-2024 16:36:58.735) (total time: 513ms):
	Trace[1774261545]: ---"Object deleted from database" 315ms (16:36:59.248)
	Trace[1774261545]: [513.338309ms] [513.338309ms] END
	
	
	==> kube-controller-manager [495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e] <==
	I0416 16:36:58.153399       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-mlxgv"
	I0416 16:36:59.194930       1 event.go:376] "Event occurred" object="ha-543552-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller"
	I0416 16:36:59.435527       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-543552-m04"
	I0416 16:36:59.700609       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zqhwm"
	I0416 16:36:59.857558       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-zqhwm"
	I0416 16:36:59.886282       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-s5k75"
	I0416 16:37:02.208878       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k52cr"
	I0416 16:37:02.334831       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-lvsz7"
	I0416 16:37:02.354321       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-bg4d8"
	I0416 16:37:04.214159       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fp4tj"
	I0416 16:37:04.302710       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-clv7c"
	I0416 16:37:04.302777       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-fp4tj"
	I0416 16:37:08.253630       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-543552-m04"
	I0416 16:38:04.473730       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-543552-m04"
	I0416 16:38:04.477180       1 event.go:376] "Event occurred" object="ha-543552-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-543552-m02 status is now: NodeNotReady"
	I0416 16:38:04.496231       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.516013       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.534379       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.557115       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.574568       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-7wbjg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.599131       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-2vkts" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.623732       1 event.go:376] "Event occurred" object="kube-system/kindnet-q4275" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.651853       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-543552-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 16:38:04.669393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.435225ms"
	I0416 16:38:04.669657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.164µs"
	
	
	==> kube-proxy [697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18] <==
	I0416 16:33:56.602032       1 server_others.go:72] "Using iptables proxy"
	I0416 16:33:56.640935       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0416 16:33:56.707637       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:33:56.707703       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:33:56.707720       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:33:56.712410       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:33:56.713718       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:33:56.713785       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:33:56.721082       1 config.go:188] "Starting service config controller"
	I0416 16:33:56.721372       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:33:56.721448       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:33:56.721522       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:33:56.723915       1 config.go:315] "Starting node config controller"
	I0416 16:33:56.725460       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 16:33:56.822614       1 shared_informer.go:318] Caches are synced for service config
	I0416 16:33:56.822738       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:33:56.825934       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9] <==
	W0416 16:33:38.946529       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 16:33:38.946586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0416 16:33:41.203343       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0416 16:36:57.928047       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s25jv\": pod kindnet-s25jv is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-s25jv" node="ha-543552-m04"
	E0416 16:36:57.928562       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 3c985da9-dade-474a-ab1f-75843d9b0fd6(kube-system/kindnet-s25jv) wasn't assumed so cannot be forgotten"
	E0416 16:36:57.928749       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s25jv\": pod kindnet-s25jv is already assigned to node \"ha-543552-m04\"" pod="kube-system/kindnet-s25jv"
	I0416 16:36:57.928829       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-s25jv" node="ha-543552-m04"
	E0416 16:36:57.929174       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g5pqm\": pod kube-proxy-g5pqm is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g5pqm" node="ha-543552-m04"
	E0416 16:36:57.929301       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod ffb4dcbe-b292-4915-b82b-c71e58f6de69(kube-system/kube-proxy-g5pqm) wasn't assumed so cannot be forgotten"
	E0416 16:36:57.929334       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g5pqm\": pod kube-proxy-g5pqm is already assigned to node \"ha-543552-m04\"" pod="kube-system/kube-proxy-g5pqm"
	I0416 16:36:57.929348       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-g5pqm" node="ha-543552-m04"
	E0416 16:36:58.057395       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mlxgv\": pod kube-proxy-mlxgv is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mlxgv" node="ha-543552-m04"
	E0416 16:36:58.057719       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mlxgv\": pod kube-proxy-mlxgv is already assigned to node \"ha-543552-m04\"" pod="kube-system/kube-proxy-mlxgv"
	E0416 16:36:59.730620       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ntsjq\": pod kindnet-ntsjq is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ntsjq" node="ha-543552-m04"
	E0416 16:36:59.730711       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod a3055093-18f1-4a2c-80e2-4d5809d6628e(kube-system/kindnet-ntsjq) wasn't assumed so cannot be forgotten"
	E0416 16:36:59.730751       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ntsjq\": pod kindnet-ntsjq is already assigned to node \"ha-543552-m04\"" pod="kube-system/kindnet-ntsjq"
	I0416 16:36:59.730773       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ntsjq" node="ha-543552-m04"
	E0416 16:36:59.735334       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s5k75\": pod kindnet-s5k75 is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-s5k75" node="ha-543552-m04"
	E0416 16:36:59.735423       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 3050441c-9f24-42fe-83c1-883f4c9ffc17(kube-system/kindnet-s5k75) wasn't assumed so cannot be forgotten"
	E0416 16:36:59.735455       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s5k75\": pod kindnet-s5k75 is already assigned to node \"ha-543552-m04\"" pod="kube-system/kindnet-s5k75"
	I0416 16:36:59.735480       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-s5k75" node="ha-543552-m04"
	E0416 16:37:02.233861       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k52cr\": pod kindnet-k52cr is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k52cr" node="ha-543552-m04"
	E0416 16:37:02.236277       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 862e02ec-536d-4056-a442-98f377da86b2(kube-system/kindnet-k52cr) wasn't assumed so cannot be forgotten"
	E0416 16:37:02.236508       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k52cr\": pod kindnet-k52cr is already assigned to node \"ha-543552-m04\"" pod="kube-system/kindnet-k52cr"
	I0416 16:37:02.236615       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k52cr" node="ha-543552-m04"
	
	
	==> kubelet <==
	Apr 16 16:36:41 ha-543552 kubelet[1371]: E0416 16:36:41.439294    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:36:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:36:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:36:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:36:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:37:41 ha-543552 kubelet[1371]: E0416 16:37:41.434470    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:37:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:37:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:37:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:37:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:38:41 ha-543552 kubelet[1371]: E0416 16:38:41.432921    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:38:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:38:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:38:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:38:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:39:41 ha-543552 kubelet[1371]: E0416 16:39:41.435149    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:39:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:39:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:39:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:39:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:40:41 ha-543552 kubelet[1371]: E0416 16:40:41.453174    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:40:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:40:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:40:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:40:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-543552 -n ha-543552
helpers_test.go:261: (dbg) Run:  kubectl --context ha-543552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (409.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-543552 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-543552 -v=7 --alsologtostderr
E0416 16:42:03.889468   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:42:10.030584   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:42:37.716126   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-543552 -v=7 --alsologtostderr: exit status 82 (2m2.706911833s)

                                                
                                                
-- stdout --
	* Stopping node "ha-543552-m04"  ...
	* Stopping node "ha-543552-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:40:43.738025   26614 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:40:43.738166   26614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:43.738176   26614 out.go:304] Setting ErrFile to fd 2...
	I0416 16:40:43.738181   26614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:40:43.738377   26614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:40:43.738613   26614 out.go:298] Setting JSON to false
	I0416 16:40:43.738724   26614 mustload.go:65] Loading cluster: ha-543552
	I0416 16:40:43.739063   26614 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:40:43.739155   26614 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:40:43.739339   26614 mustload.go:65] Loading cluster: ha-543552
	I0416 16:40:43.739469   26614 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:40:43.739493   26614 stop.go:39] StopHost: ha-543552-m04
	I0416 16:40:43.739902   26614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:43.739948   26614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:43.754325   26614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0416 16:40:43.754742   26614 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:43.755334   26614 main.go:141] libmachine: Using API Version  1
	I0416 16:40:43.755357   26614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:43.755698   26614 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:43.759274   26614 out.go:177] * Stopping node "ha-543552-m04"  ...
	I0416 16:40:43.760826   26614 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 16:40:43.760881   26614 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:40:43.761107   26614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 16:40:43.761138   26614 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:40:43.763789   26614 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:43.764191   26614 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:36:44 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:40:43.764218   26614 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:40:43.764368   26614 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:40:43.764525   26614 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:40:43.764672   26614 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:40:43.764803   26614 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:40:43.854094   26614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 16:40:43.910650   26614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 16:40:43.965649   26614 main.go:141] libmachine: Stopping "ha-543552-m04"...
	I0416 16:40:43.965672   26614 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:40:43.967206   26614 main.go:141] libmachine: (ha-543552-m04) Calling .Stop
	I0416 16:40:43.970527   26614 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 0/120
	I0416 16:40:44.971845   26614 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 1/120
	I0416 16:40:45.973451   26614 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:40:45.974752   26614 main.go:141] libmachine: Machine "ha-543552-m04" was stopped.
	I0416 16:40:45.974774   26614 stop.go:75] duration metric: took 2.213949588s to stop
	I0416 16:40:45.974797   26614 stop.go:39] StopHost: ha-543552-m03
	I0416 16:40:45.975102   26614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:40:45.975165   26614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:40:45.989655   26614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0416 16:40:45.990066   26614 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:40:45.990595   26614 main.go:141] libmachine: Using API Version  1
	I0416 16:40:45.990633   26614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:40:45.990946   26614 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:40:45.993122   26614 out.go:177] * Stopping node "ha-543552-m03"  ...
	I0416 16:40:45.994689   26614 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 16:40:45.994710   26614 main.go:141] libmachine: (ha-543552-m03) Calling .DriverName
	I0416 16:40:45.994928   26614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 16:40:45.994949   26614 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHHostname
	I0416 16:40:45.997805   26614 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:45.998237   26614 main.go:141] libmachine: (ha-543552-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:15:9d", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:35:23 +0000 UTC Type:0 Mac:52:54:00:f9:15:9d Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-543552-m03 Clientid:01:52:54:00:f9:15:9d}
	I0416 16:40:45.998258   26614 main.go:141] libmachine: (ha-543552-m03) DBG | domain ha-543552-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:f9:15:9d in network mk-ha-543552
	I0416 16:40:45.998444   26614 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHPort
	I0416 16:40:45.998627   26614 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHKeyPath
	I0416 16:40:45.998794   26614 main.go:141] libmachine: (ha-543552-m03) Calling .GetSSHUsername
	I0416 16:40:45.998943   26614 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m03/id_rsa Username:docker}
	I0416 16:40:46.082505   26614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 16:40:46.140175   26614 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 16:40:46.199534   26614 main.go:141] libmachine: Stopping "ha-543552-m03"...
	I0416 16:40:46.199566   26614 main.go:141] libmachine: (ha-543552-m03) Calling .GetState
	I0416 16:40:46.201169   26614 main.go:141] libmachine: (ha-543552-m03) Calling .Stop
	I0416 16:40:46.204233   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 0/120
	I0416 16:40:47.205631   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 1/120
	I0416 16:40:48.206904   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 2/120
	I0416 16:40:49.208170   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 3/120
	I0416 16:40:50.209523   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 4/120
	I0416 16:40:51.212040   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 5/120
	I0416 16:40:52.213345   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 6/120
	I0416 16:40:53.214850   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 7/120
	I0416 16:40:54.216136   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 8/120
	I0416 16:40:55.217596   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 9/120
	I0416 16:40:56.219887   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 10/120
	I0416 16:40:57.221178   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 11/120
	I0416 16:40:58.222798   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 12/120
	I0416 16:40:59.224341   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 13/120
	I0416 16:41:00.225773   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 14/120
	I0416 16:41:01.227566   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 15/120
	I0416 16:41:02.229689   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 16/120
	I0416 16:41:03.231319   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 17/120
	I0416 16:41:04.232909   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 18/120
	I0416 16:41:05.234250   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 19/120
	I0416 16:41:06.235838   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 20/120
	I0416 16:41:07.237473   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 21/120
	I0416 16:41:08.239028   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 22/120
	I0416 16:41:09.240485   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 23/120
	I0416 16:41:10.241905   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 24/120
	I0416 16:41:11.243865   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 25/120
	I0416 16:41:12.245412   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 26/120
	I0416 16:41:13.246839   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 27/120
	I0416 16:41:14.248129   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 28/120
	I0416 16:41:15.250391   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 29/120
	I0416 16:41:16.252060   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 30/120
	I0416 16:41:17.253563   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 31/120
	I0416 16:41:18.255391   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 32/120
	I0416 16:41:19.256717   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 33/120
	I0416 16:41:20.258014   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 34/120
	I0416 16:41:21.259572   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 35/120
	I0416 16:41:22.260870   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 36/120
	I0416 16:41:23.262278   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 37/120
	I0416 16:41:24.263590   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 38/120
	I0416 16:41:25.264827   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 39/120
	I0416 16:41:26.266412   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 40/120
	I0416 16:41:27.267692   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 41/120
	I0416 16:41:28.268794   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 42/120
	I0416 16:41:29.270046   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 43/120
	I0416 16:41:30.271188   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 44/120
	I0416 16:41:31.272973   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 45/120
	I0416 16:41:32.274232   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 46/120
	I0416 16:41:33.275385   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 47/120
	I0416 16:41:34.276745   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 48/120
	I0416 16:41:35.277886   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 49/120
	I0416 16:41:36.279581   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 50/120
	I0416 16:41:37.281615   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 51/120
	I0416 16:41:38.282795   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 52/120
	I0416 16:41:39.284181   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 53/120
	I0416 16:41:40.285406   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 54/120
	I0416 16:41:41.286945   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 55/120
	I0416 16:41:42.288026   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 56/120
	I0416 16:41:43.289657   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 57/120
	I0416 16:41:44.290937   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 58/120
	I0416 16:41:45.292251   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 59/120
	I0416 16:41:46.294427   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 60/120
	I0416 16:41:47.295779   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 61/120
	I0416 16:41:48.296996   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 62/120
	I0416 16:41:49.298318   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 63/120
	I0416 16:41:50.299610   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 64/120
	I0416 16:41:51.301144   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 65/120
	I0416 16:41:52.303175   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 66/120
	I0416 16:41:53.304383   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 67/120
	I0416 16:41:54.305726   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 68/120
	I0416 16:41:55.306941   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 69/120
	I0416 16:41:56.308346   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 70/120
	I0416 16:41:57.309520   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 71/120
	I0416 16:41:58.310720   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 72/120
	I0416 16:41:59.311913   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 73/120
	I0416 16:42:00.313258   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 74/120
	I0416 16:42:01.315013   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 75/120
	I0416 16:42:02.316226   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 76/120
	I0416 16:42:03.317554   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 77/120
	I0416 16:42:04.318674   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 78/120
	I0416 16:42:05.319854   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 79/120
	I0416 16:42:06.321727   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 80/120
	I0416 16:42:07.322839   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 81/120
	I0416 16:42:08.324047   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 82/120
	I0416 16:42:09.325387   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 83/120
	I0416 16:42:10.327415   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 84/120
	I0416 16:42:11.329031   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 85/120
	I0416 16:42:12.330142   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 86/120
	I0416 16:42:13.331385   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 87/120
	I0416 16:42:14.332601   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 88/120
	I0416 16:42:15.333866   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 89/120
	I0416 16:42:16.335310   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 90/120
	I0416 16:42:17.337678   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 91/120
	I0416 16:42:18.339622   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 92/120
	I0416 16:42:19.341117   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 93/120
	I0416 16:42:20.342509   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 94/120
	I0416 16:42:21.344440   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 95/120
	I0416 16:42:22.345911   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 96/120
	I0416 16:42:23.347404   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 97/120
	I0416 16:42:24.348597   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 98/120
	I0416 16:42:25.349945   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 99/120
	I0416 16:42:26.351641   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 100/120
	I0416 16:42:27.352801   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 101/120
	I0416 16:42:28.354088   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 102/120
	I0416 16:42:29.355301   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 103/120
	I0416 16:42:30.357193   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 104/120
	I0416 16:42:31.359132   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 105/120
	I0416 16:42:32.360287   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 106/120
	I0416 16:42:33.361684   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 107/120
	I0416 16:42:34.362913   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 108/120
	I0416 16:42:35.364584   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 109/120
	I0416 16:42:36.366901   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 110/120
	I0416 16:42:37.368427   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 111/120
	I0416 16:42:38.369804   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 112/120
	I0416 16:42:39.371176   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 113/120
	I0416 16:42:40.373073   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 114/120
	I0416 16:42:41.374927   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 115/120
	I0416 16:42:42.376407   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 116/120
	I0416 16:42:43.377737   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 117/120
	I0416 16:42:44.379111   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 118/120
	I0416 16:42:45.380349   26614 main.go:141] libmachine: (ha-543552-m03) Waiting for machine to stop 119/120
	I0416 16:42:46.381476   26614 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 16:42:46.381543   26614 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 16:42:46.383704   26614 out.go:177] 
	W0416 16:42:46.385353   26614 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 16:42:46.385375   26614 out.go:239] * 
	W0416 16:42:46.388135   26614 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 16:42:46.390528   26614 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-543552 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-543552 --wait=true -v=7 --alsologtostderr
E0416 16:47:03.889967   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:47:10.030699   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-543552 --wait=true -v=7 --alsologtostderr: (4m43.982301678s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-543552
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-543552 -n ha-543552
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-543552 logs -n 25: (2.132423559s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m02:/home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m02 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04:/home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m04 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp testdata/cp-test.txt                                                | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552:/home/docker/cp-test_ha-543552-m04_ha-543552.txt                       |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552 sudo cat                                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552.txt                                 |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m02:/home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m02 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03:/home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m03 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-543552 node stop m02 -v=7                                                     | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-543552 node start m02 -v=7                                                    | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-543552 -v=7                                                           | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-543552 -v=7                                                                | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-543552 --wait=true -v=7                                                    | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:42 UTC | 16 Apr 24 16:47 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-543552                                                                | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:42:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:42:46.447058   27095 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:42:46.447302   27095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:42:46.447311   27095 out.go:304] Setting ErrFile to fd 2...
	I0416 16:42:46.447315   27095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:42:46.447472   27095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:42:46.447981   27095 out.go:298] Setting JSON to false
	I0416 16:42:46.448825   27095 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1518,"bootTime":1713284248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:42:46.448895   27095 start.go:139] virtualization: kvm guest
	I0416 16:42:46.452101   27095 out.go:177] * [ha-543552] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:42:46.453764   27095 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:42:46.453790   27095 notify.go:220] Checking for updates...
	I0416 16:42:46.455420   27095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:42:46.457087   27095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:42:46.458556   27095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:42:46.459940   27095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:42:46.461380   27095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:42:46.463064   27095 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:42:46.463174   27095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:42:46.463569   27095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:42:46.463626   27095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:42:46.478825   27095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36183
	I0416 16:42:46.479301   27095 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:42:46.479926   27095 main.go:141] libmachine: Using API Version  1
	I0416 16:42:46.479950   27095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:42:46.480354   27095 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:42:46.480517   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:42:46.514609   27095 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 16:42:46.515781   27095 start.go:297] selected driver: kvm2
	I0416 16:42:46.515793   27095 start.go:901] validating driver "kvm2" against &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.126 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:42:46.515952   27095 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:42:46.516289   27095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:42:46.516361   27095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:42:46.530554   27095 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:42:46.532268   27095 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:42:46.532316   27095 cni.go:84] Creating CNI manager for ""
	I0416 16:42:46.532322   27095 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0416 16:42:46.532369   27095 start.go:340] cluster config:
	{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.126 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:42:46.532512   27095 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:42:46.535045   27095 out.go:177] * Starting "ha-543552" primary control-plane node in "ha-543552" cluster
	I0416 16:42:46.536321   27095 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:42:46.536360   27095 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 16:42:46.536371   27095 cache.go:56] Caching tarball of preloaded images
	I0416 16:42:46.536453   27095 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:42:46.536465   27095 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:42:46.536571   27095 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:42:46.536766   27095 start.go:360] acquireMachinesLock for ha-543552: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:42:46.536809   27095 start.go:364] duration metric: took 23.709µs to acquireMachinesLock for "ha-543552"
	I0416 16:42:46.536825   27095 start.go:96] Skipping create...Using existing machine configuration
	I0416 16:42:46.536894   27095 fix.go:54] fixHost starting: 
	I0416 16:42:46.537168   27095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:42:46.537201   27095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:42:46.550576   27095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0416 16:42:46.551006   27095 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:42:46.551530   27095 main.go:141] libmachine: Using API Version  1
	I0416 16:42:46.551554   27095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:42:46.551881   27095 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:42:46.552113   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:42:46.552309   27095 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:42:46.553860   27095 fix.go:112] recreateIfNeeded on ha-543552: state=Running err=<nil>
	W0416 16:42:46.553899   27095 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 16:42:46.556652   27095 out.go:177] * Updating the running kvm2 "ha-543552" VM ...
	I0416 16:42:46.558030   27095 machine.go:94] provisionDockerMachine start ...
	I0416 16:42:46.558051   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:42:46.558281   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:46.560779   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.561257   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.561281   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.561434   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:46.561615   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.561758   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.561894   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:46.562034   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:42:46.562267   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:42:46.562285   27095 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:42:46.698478   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552
	
	I0416 16:42:46.698513   27095 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:42:46.698742   27095 buildroot.go:166] provisioning hostname "ha-543552"
	I0416 16:42:46.698771   27095 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:42:46.698972   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:46.701667   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.702042   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.702079   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.702200   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:46.702374   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.702545   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.702679   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:46.702854   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:42:46.703073   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:42:46.703103   27095 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-543552 && echo "ha-543552" | sudo tee /etc/hostname
	I0416 16:42:46.842581   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552
	
	I0416 16:42:46.842616   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:46.845249   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.845646   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.845674   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.845797   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:46.845985   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.846166   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.846312   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:46.846476   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:42:46.846650   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:42:46.846668   27095 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-543552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-543552/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-543552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:42:46.958123   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:42:46.958150   27095 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:42:46.958187   27095 buildroot.go:174] setting up certificates
	I0416 16:42:46.958195   27095 provision.go:84] configureAuth start
	I0416 16:42:46.958203   27095 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:42:46.958562   27095 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:42:46.961300   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.961665   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.961691   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.961780   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:46.964088   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.964404   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.964436   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.964555   27095 provision.go:143] copyHostCerts
	I0416 16:42:46.964585   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:42:46.964634   27095 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 16:42:46.964655   27095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:42:46.964738   27095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:42:46.964850   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:42:46.964878   27095 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 16:42:46.964889   27095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:42:46.964928   27095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:42:46.964999   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:42:46.965023   27095 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 16:42:46.965032   27095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:42:46.965072   27095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:42:46.965156   27095 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.ha-543552 san=[127.0.0.1 192.168.39.97 ha-543552 localhost minikube]
	I0416 16:42:47.089013   27095 provision.go:177] copyRemoteCerts
	I0416 16:42:47.089078   27095 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:42:47.089103   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:47.091521   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:47.091970   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:47.091994   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:47.092209   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:47.092417   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:47.092573   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:47.092683   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:42:47.182484   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 16:42:47.182553   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:42:47.213899   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 16:42:47.213969   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 16:42:47.242781   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 16:42:47.242837   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:42:47.269636   27095 provision.go:87] duration metric: took 311.431382ms to configureAuth
	I0416 16:42:47.269661   27095 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:42:47.269886   27095 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:42:47.269960   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:47.272653   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:47.273050   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:47.273080   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:47.273284   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:47.273472   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:47.273643   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:47.273782   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:47.273942   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:42:47.274091   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:42:47.274106   27095 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:44:18.195577   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 16:44:18.195602   27095 machine.go:97] duration metric: took 1m31.637556524s to provisionDockerMachine
	I0416 16:44:18.195615   27095 start.go:293] postStartSetup for "ha-543552" (driver="kvm2")
	I0416 16:44:18.195626   27095 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:44:18.195652   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.196023   27095 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:44:18.196058   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.199049   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.199487   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.199545   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.199609   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.199817   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.200003   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.200111   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:44:18.286804   27095 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:44:18.291585   27095 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:44:18.291621   27095 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:44:18.291686   27095 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:44:18.291769   27095 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 16:44:18.291782   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 16:44:18.291885   27095 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:44:18.303191   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:44:18.330048   27095 start.go:296] duration metric: took 134.420713ms for postStartSetup
	I0416 16:44:18.330085   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.330361   27095 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0416 16:44:18.330390   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.333009   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.333592   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.333632   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.333765   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.333928   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.334079   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.334186   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	W0416 16:44:18.422063   27095 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0416 16:44:18.422088   27095 fix.go:56] duration metric: took 1m31.885254681s for fixHost
	I0416 16:44:18.422108   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.424776   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.425135   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.425163   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.425298   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.425493   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.425636   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.425794   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.425999   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:44:18.426152   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:44:18.426163   27095 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:44:18.538052   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285858.503032874
	
	I0416 16:44:18.538077   27095 fix.go:216] guest clock: 1713285858.503032874
	I0416 16:44:18.538084   27095 fix.go:229] Guest: 2024-04-16 16:44:18.503032874 +0000 UTC Remote: 2024-04-16 16:44:18.422095403 +0000 UTC m=+92.020966215 (delta=80.937471ms)
	I0416 16:44:18.538117   27095 fix.go:200] guest clock delta is within tolerance: 80.937471ms
	I0416 16:44:18.538123   27095 start.go:83] releasing machines lock for "ha-543552", held for 1m32.001303379s
	I0416 16:44:18.538146   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.538391   27095 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:44:18.541053   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.541472   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.541497   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.541680   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.542150   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.542307   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.542377   27095 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:44:18.542413   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.542594   27095 ssh_runner.go:195] Run: cat /version.json
	I0416 16:44:18.542620   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.545148   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.545365   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.545552   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.545582   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.545935   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.545993   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.546034   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.546086   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.546168   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.546237   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.546309   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.546370   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.546431   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:44:18.546464   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:44:18.656667   27095 ssh_runner.go:195] Run: systemctl --version
	I0416 16:44:18.663268   27095 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:44:18.830629   27095 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:44:18.841127   27095 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:44:18.841185   27095 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:44:18.851335   27095 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 16:44:18.851353   27095 start.go:494] detecting cgroup driver to use...
	I0416 16:44:18.851405   27095 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:44:18.869324   27095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:44:18.883599   27095 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:44:18.883648   27095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:44:18.897905   27095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:44:18.912065   27095 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:44:19.069099   27095 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:44:19.224737   27095 docker.go:233] disabling docker service ...
	I0416 16:44:19.224802   27095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:44:19.241830   27095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:44:19.258250   27095 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:44:19.413597   27095 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:44:19.569044   27095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:44:19.583698   27095 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:44:19.605543   27095 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:44:19.605594   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.616939   27095 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:44:19.616989   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.628331   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.639298   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.650604   27095 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:44:19.662984   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.673916   27095 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.687477   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.699416   27095 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:44:19.709355   27095 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:44:19.719075   27095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:44:19.871768   27095 ssh_runner.go:195] Run: sudo systemctl restart crio
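	The steps above write /etc/crictl.yaml, patch /etc/crio/crio.conf.d/02-crio.conf for the pause image, the cgroupfs cgroup manager and the conmon cgroup, enable IP forwarding, and then restart the runtime. A consolidated shell sketch of the same sequence, with every path and value copied from the log lines above (this is not the exact code minikube executes):
	  # write the crictl endpoint and apply the CRI-O edits recorded above
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sudo systemctl daemon-reload && sudo systemctl restart crio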
	I0416 16:44:20.247056   27095 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:44:20.247115   27095 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:44:20.253402   27095 start.go:562] Will wait 60s for crictl version
	I0416 16:44:20.253469   27095 ssh_runner.go:195] Run: which crictl
	I0416 16:44:20.258304   27095 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:44:20.307506   27095 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:44:20.307594   27095 ssh_runner.go:195] Run: crio --version
	I0416 16:44:20.341651   27095 ssh_runner.go:195] Run: crio --version
	I0416 16:44:20.375038   27095 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:44:20.376427   27095 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:44:20.379091   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:20.379551   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:20.379578   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:20.379783   27095 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:44:20.385066   27095 kubeadm.go:877] updating cluster {Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.126 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:44:20.385203   27095 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:44:20.385250   27095 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:44:20.432240   27095 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 16:44:20.432260   27095 crio.go:433] Images already preloaded, skipping extraction
	I0416 16:44:20.432306   27095 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:44:20.469407   27095 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 16:44:20.469428   27095 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:44:20.469436   27095 kubeadm.go:928] updating node { 192.168.39.97 8443 v1.29.3 crio true true} ...
	I0416 16:44:20.469515   27095 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-543552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 16:44:20.469574   27095 ssh_runner.go:195] Run: crio config
	I0416 16:44:20.522144   27095 cni.go:84] Creating CNI manager for ""
	I0416 16:44:20.522170   27095 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0416 16:44:20.522178   27095 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:44:20.522200   27095 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-543552 NodeName:ha-543552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:44:20.522322   27095 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-543552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
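	The rendered kubeadm config above is what later lands on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line further below). As a purely illustrative aside, a config of this shape can be exercised without mutating anything via kubeadm's dry-run mode; the path here mirrors the scp target and this is not a command the test itself runs:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run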
	
	I0416 16:44:20.522341   27095 kube-vip.go:111] generating kube-vip config ...
	I0416 16:44:20.522377   27095 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:44:20.535342   27095 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:44:20.535425   27095 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
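	The manifest above pins the control-plane VIP 192.168.39.254 to eth0 and load-balances the API on port 8443. Two hypothetical spot checks on a control-plane node (not part of the recorded run) to confirm the VIP is actually being served:
	  ip addr show eth0 | grep 192.168.39.254       # is the VIP bound to the interface named in the manifest?
	  curl -sk https://192.168.39.254:8443/version  # does the API server answer on the load-balanced port?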
	I0416 16:44:20.535471   27095 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:44:20.545523   27095 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:44:20.545582   27095 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:44:20.556672   27095 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0416 16:44:20.575004   27095 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:44:20.594022   27095 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0416 16:44:20.612388   27095 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0416 16:44:20.632015   27095 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:44:20.636676   27095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:44:20.800163   27095 ssh_runner.go:195] Run: sudo systemctl start kubelet
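	With the drop-in, unit file and kube-vip manifest copied over and kubelet restarted, a quick hypothetical check (not run by the test) of what systemd will actually execute:
	  systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf drop-in written above
	  systemctl is-active kubelet  # expect "active" once the node settles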
	I0416 16:44:20.819297   27095 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552 for IP: 192.168.39.97
	I0416 16:44:20.819318   27095 certs.go:194] generating shared ca certs ...
	I0416 16:44:20.819333   27095 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:44:20.819472   27095 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:44:20.819509   27095 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:44:20.819519   27095 certs.go:256] generating profile certs ...
	I0416 16:44:20.819596   27095 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key
	I0416 16:44:20.819621   27095 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.8b6168fb
	I0416 16:44:20.819633   27095 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.8b6168fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.80 192.168.39.125 192.168.39.254]
	I0416 16:44:21.175357   27095 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.8b6168fb ...
	I0416 16:44:21.175385   27095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.8b6168fb: {Name:mk1501f25805c360dbf87b20b36f8d058b5d5d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:44:21.175539   27095 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.8b6168fb ...
	I0416 16:44:21.175550   27095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.8b6168fb: {Name:mkd52e22a73bfbe45bc889b3d428bcb585149e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:44:21.175615   27095 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.8b6168fb -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt
	I0416 16:44:21.175746   27095 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.8b6168fb -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key
	I0416 16:44:21.175862   27095 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key
	I0416 16:44:21.175877   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:44:21.175889   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:44:21.175910   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:44:21.175923   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:44:21.175936   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:44:21.175947   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:44:21.175959   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:44:21.175996   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:44:21.176041   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 16:44:21.176070   27095 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 16:44:21.176079   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:44:21.176107   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:44:21.176129   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:44:21.176157   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:44:21.176201   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:44:21.176228   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 16:44:21.176242   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 16:44:21.176254   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:44:21.176869   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:44:21.216249   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:44:21.241537   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:44:21.268691   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:44:21.294206   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 16:44:21.319468   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:44:21.348907   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:44:21.375630   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:44:21.402095   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 16:44:21.429641   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 16:44:21.458195   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:44:21.483551   27095 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:44:21.501981   27095 ssh_runner.go:195] Run: openssl version
	I0416 16:44:21.508355   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 16:44:21.520580   27095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 16:44:21.525435   27095 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 16:44:21.525482   27095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 16:44:21.531709   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 16:44:21.542873   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 16:44:21.555903   27095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 16:44:21.560607   27095 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 16:44:21.560648   27095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 16:44:21.566768   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:44:21.577741   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:44:21.590709   27095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:44:21.595547   27095 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:44:21.595585   27095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:44:21.601502   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
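	The ls / openssl x509 -hash / ln sequences above follow the OpenSSL hashed-directory convention: each CA ends up linked as /etc/ssl/certs/<subject-hash>.0 so verification can locate it by hash. A hypothetical way to recompute the link name for minikubeCA.pem on the node (not part of the recorded run):
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  ls -l /etc/ssl/certs/$h.0    # the run above created this link as b5213941.0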
	I0416 16:44:21.611952   27095 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:44:21.616857   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 16:44:21.622791   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 16:44:21.629050   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 16:44:21.634982   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 16:44:21.641057   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 16:44:21.648105   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
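	Each probe above uses openssl x509 -checkend 86400, which exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours). The same check in loop form, with cert names copied from the log (purely illustrative):
	  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/$c.crt; then
	      echo "$c.crt: valid for at least 24h"
	    else
	      echo "$c.crt: expires within 24h"
	    fi
	  done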
	I0416 16:44:21.653969   27095 kubeadm.go:391] StartCluster: {Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.126 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:44:21.654088   27095 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 16:44:21.654273   27095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:44:21.696081   27095 cri.go:89] found id: "34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	I0416 16:44:21.696105   27095 cri.go:89] found id: "c362500f7e55526abcf7249f79b2175d1d1d631675eb2ca2853467620d503f4d"
	I0416 16:44:21.696110   27095 cri.go:89] found id: "5253ff7e10c8b05ddf63d97cc374fa63de54e7da01db140397b9d7c362ec886f"
	I0416 16:44:21.696114   27095 cri.go:89] found id: "c5a3fffcef10ebf58c0c68e68eb1ed85bce4828a270949fad6fcc88bd60a9035"
	I0416 16:44:21.696118   27095 cri.go:89] found id: "77fbefda8f60d33884d3055d8a68bb6fbaeafb8168891df56026217ea04576c5"
	I0416 16:44:21.696122   27095 cri.go:89] found id: "516d3634a70bd6b25e4837c7c531541aa74dae91e0e0fad94f7f5eae6eca436e"
	I0416 16:44:21.696130   27095 cri.go:89] found id: "a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324"
	I0416 16:44:21.696133   27095 cri.go:89] found id: "e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108"
	I0416 16:44:21.696135   27095 cri.go:89] found id: "697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18"
	I0416 16:44:21.696140   27095 cri.go:89] found id: "b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c"
	I0416 16:44:21.696143   27095 cri.go:89] found id: "495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e"
	I0416 16:44:21.696145   27095 cri.go:89] found id: "ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1"
	I0416 16:44:21.696149   27095 cri.go:89] found id: "5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9"
	I0416 16:44:21.696152   27095 cri.go:89] found id: "80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88"
	I0416 16:44:21.696157   27095 cri.go:89] found id: ""
	I0416 16:44:21.696194   27095 ssh_runner.go:195] Run: sudo runc list -f json
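	The crictl query above returns only bare container IDs. A hypothetical follow-up (assuming jq is installed on the node; not something the test runs) to map those IDs back to container names in kube-system:
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json \
	    | jq -r '.containers[] | .id[0:13] + "  " + .metadata.name'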
	
	
	==> CRI-O <==
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.255835603Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbafab02-b7cc-492f-9079-671b30cf368a name=/runtime.v1.RuntimeService/Version
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.257283329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50d4c077-7fbb-45fb-b355-8b91ab2793bb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.258146076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713286051258120566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50d4c077-7fbb-45fb-b355-8b91ab2793bb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.259177886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03eedc4e-7f86-44e3-8c94-2edecbf44ec5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.259233429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03eedc4e-7f86-44e3-8c94-2edecbf44ec5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.259640969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f626f23f45f45114f63396fa72b114930ec60451bc8e3ecd87dbd51a757e6b5,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713286034408549393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285943393319204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285908399227992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285907396041694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811738ab74e7743b024d87fcbf087efef3a91fd5cffd0f0125dd87cd5a63f426,PodSandboxId:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285897692499532,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d28dc14e24d93141915f5d854b997e42c83d402327718cd3878be9782d19db9,PodSandboxId:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285879959711575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b,PodSandboxId:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285864732057766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051,PodSandboxId:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864779216681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713285864386239854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c,PodSandboxId:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864537111852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2,PodSandboxId:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285864490679318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713285864380621177,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d31
3882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713285864245183352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e
3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98,PodSandboxId:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285864217104521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713285861080897629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713285382938105207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kuber
netes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238765247564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238689936081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713285236321635307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713285214233687514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713285214183932146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03eedc4e-7f86-44e3-8c94-2edecbf44ec5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.310071814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53d1fa77-94eb-4f8c-9bf2-18bf6eb3af5f name=/runtime.v1.RuntimeService/Version
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.310471425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53d1fa77-94eb-4f8c-9bf2-18bf6eb3af5f name=/runtime.v1.RuntimeService/Version
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.313454983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ceb0768-699c-4060-bef9-f3df18ce479b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.314022631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713286051313937208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ceb0768-699c-4060-bef9-f3df18ce479b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.314519802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c6c11d6-cae4-49f7-864e-caf92afc6605 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.314573393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c6c11d6-cae4-49f7-864e-caf92afc6605 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.315057422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f626f23f45f45114f63396fa72b114930ec60451bc8e3ecd87dbd51a757e6b5,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713286034408549393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285943393319204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285908399227992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285907396041694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811738ab74e7743b024d87fcbf087efef3a91fd5cffd0f0125dd87cd5a63f426,PodSandboxId:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285897692499532,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d28dc14e24d93141915f5d854b997e42c83d402327718cd3878be9782d19db9,PodSandboxId:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285879959711575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b,PodSandboxId:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285864732057766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051,PodSandboxId:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864779216681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713285864386239854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c,PodSandboxId:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864537111852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2,PodSandboxId:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285864490679318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713285864380621177,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d31
3882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713285864245183352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e
3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98,PodSandboxId:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285864217104521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713285861080897629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713285382938105207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kuber
netes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238765247564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238689936081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713285236321635307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713285214233687514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713285214183932146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c6c11d6-cae4-49f7-864e-caf92afc6605 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.367828945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6206a86-354c-46fa-b6ba-c0f6c71430ff name=/runtime.v1.RuntimeService/Version
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.367930067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6206a86-354c-46fa-b6ba-c0f6c71430ff name=/runtime.v1.RuntimeService/Version
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.369497278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88d07a49-eefa-40a5-a361-b50bbc5042d7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.370064874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713286051370037448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88d07a49-eefa-40a5-a361-b50bbc5042d7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.370789456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=379a8720-5af8-452c-9865-ba4d36aa6609 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.370849943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=379a8720-5af8-452c-9865-ba4d36aa6609 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.371569297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f626f23f45f45114f63396fa72b114930ec60451bc8e3ecd87dbd51a757e6b5,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713286034408549393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285943393319204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285908399227992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285907396041694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811738ab74e7743b024d87fcbf087efef3a91fd5cffd0f0125dd87cd5a63f426,PodSandboxId:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285897692499532,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d28dc14e24d93141915f5d854b997e42c83d402327718cd3878be9782d19db9,PodSandboxId:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285879959711575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b,PodSandboxId:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285864732057766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051,PodSandboxId:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864779216681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713285864386239854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c,PodSandboxId:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864537111852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2,PodSandboxId:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285864490679318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713285864380621177,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d31
3882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713285864245183352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e
3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98,PodSandboxId:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285864217104521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713285861080897629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713285382938105207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kuber
netes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238765247564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238689936081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713285236321635307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713285214233687514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713285214183932146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=379a8720-5af8-452c-9865-ba4d36aa6609 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.381276856Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8c267ad-b99a-4fbc-b767-4918dcd16ff3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.381528110Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-zmcc2,Uid:861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285897546293982,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:36:21.354429113Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-543552,Uid:3fd17211e1cb9517230e5aacf2735608,Namespace:kube-system,Attempt:0,},State:SANDBOX
_READY,CreatedAt:1713285879859498558,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{kubernetes.io/config.hash: 3fd17211e1cb9517230e5aacf2735608,kubernetes.io/config.seen: 2024-04-16T16:44:20.597466795Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-l9zck,Uid:4f0d01cc-4c32-4953-88ec-f07e72666894,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863883656425,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024
-04-16T16:33:58.112732437Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-k7bn7,Uid:8f45a7f4-5779-49ad-949c-29fe8ad7d485,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863864241515,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:58.101039339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-543552,Uid:82beedbd09d313882734a084237b1940,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863852326596,Labels:map[string]st
ring{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.97:8443,kubernetes.io/config.hash: 82beedbd09d313882734a084237b1940,kubernetes.io/config.seen: 2024-04-16T16:33:41.317144840Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-543552,Uid:b51bc3560314aa63dbce83c0156a5bbe,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863843538415,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,tier: control-
plane,},Annotations:map[string]string{kubernetes.io/config.hash: b51bc3560314aa63dbce83c0156a5bbe,kubernetes.io/config.seen: 2024-04-16T16:33:41.317153911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&PodSandboxMetadata{Name:etcd-ha-543552,Uid:a04ca0e1ec3faa95665bc40ac9b3d994,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863815430682,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.97:2379,kubernetes.io/config.hash: a04ca0e1ec3faa95665bc40ac9b3d994,kubernetes.io/config.seen: 2024-04-16T16:33:41.317155741Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d2f94b0c877730eb30e9c22ac2226ce4af318
6854011a52d01e1c489fd930690,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-543552,Uid:a678895e3a100c5ffc418b140fb8d7e7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863808558524,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a678895e3a100c5ffc418b140fb8d7e7,kubernetes.io/config.seen: 2024-04-16T16:33:41.317152558Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&PodSandboxMetadata{Name:kindnet-7hwtp,Uid:f54400cd-4ab3-4e00-b741-e1419d1b3b66,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863791714356,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:54.365571313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&PodSandboxMetadata{Name:kube-proxy-c9lhc,Uid:b8027952-1449-42c9-9bea-14aa1eb113aa,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863779248309,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:54.356723321Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:663f4c76-01f8-4664-9345-740540fdc41c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285860975600289,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imag
ePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-16T16:33:58.114678929Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b8c267ad-b99a-4fbc-b767-4918dcd16ff3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.382507894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=181ef8b9-0b79-49bd-85d2-3209e20290b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.382566153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=181ef8b9-0b79-49bd-85d2-3209e20290b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:47:31 ha-543552 crio[4047]: time="2024-04-16 16:47:31.382798988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f626f23f45f45114f63396fa72b114930ec60451bc8e3ecd87dbd51a757e6b5,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713286034408549393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285943393319204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285908399227992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285907396041694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811738ab74e7743b024d87fcbf087efef3a91fd5cffd0f0125dd87cd5a63f426,PodSandboxId:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285897692499532,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d28dc14e24d93141915f5d854b997e42c83d402327718cd3878be9782d19db9,PodSandboxId:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285879959711575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b,PodSandboxId:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285864732057766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051,PodSandboxId:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864779216681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c,PodSandboxId:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864537111852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2,PodSandboxId:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285864490679318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98,PodSandboxId:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285864217104521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e
1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=181ef8b9-0b79-49bd-85d2-3209e20290b6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2f626f23f45f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 seconds ago       Running             storage-provisioner       6                   8a4edbfad9eba       storage-provisioner
	a2e493bd365da       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               4                   d174184f969e7       kindnet-7hwtp
	0ed1e36b4ef80       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Running             kube-apiserver            3                   a216d954b1682       kube-apiserver-ha-543552
	04c928227a1f9       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Running             kube-controller-manager   2                   d2f94b0c87773       kube-controller-manager-ha-543552
	811738ab74e77       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   a526102cd0485       busybox-7fdf7869d9-zmcc2
	9d28dc14e24d9       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   3dfb9b0ef98a7       kube-vip-ha-543552
	30df8eedb316c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   ff16342edad0f       coredns-76f75df574-l9zck
	918c02ba99e66       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago        Running             kube-proxy                1                   5c861f43980e5       kube-proxy-c9lhc
	a279ffbd01e2f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   9bec962e688a9       coredns-76f75df574-k7bn7
	41f892ff8eaf1       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago        Running             kube-scheduler            1                   c7facafbd53b6       kube-scheduler-ha-543552
	20f390b64c98c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago        Exited              kindnet-cni               3                   d174184f969e7       kindnet-7hwtp
	95803f125e402       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago        Exited              kube-apiserver            2                   a216d954b1682       kube-apiserver-ha-543552
	eafcfbd628239       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago        Exited              kube-controller-manager   1                   d2f94b0c87773       kube-controller-manager-ha-543552
	c05f62ae79b1e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   0dd092f506c50       etcd-ha-543552
	34ee105194855       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Exited              storage-provisioner       5                   8a4edbfad9eba       storage-provisioner
	4eff3ed28c1a6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   0a4cbed3518bb       busybox-7fdf7869d9-zmcc2
	a326689cf68a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   7d0e2bbea0507       coredns-76f75df574-l9zck
	e82d4c4b6df66       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   3c0b61b8ba2ff       coredns-76f75df574-k7bn7
	697fe1db84b5d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      13 minutes ago       Exited              kube-proxy                0                   016912d243f9d       kube-proxy-c9lhc
	ce9f179d540bc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   f5aa5ed306340       etcd-ha-543552
	5f7d02aab74a8       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      13 minutes ago       Exited              kube-scheduler            0                   158c5349515db       kube-scheduler-ha-543552
	
	
	==> coredns [30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:45458->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45432->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1815561358]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 16:44:37.845) (total time: 10967ms):
	Trace[1815561358]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45432->10.96.0.1:443: read: connection reset by peer 10967ms (16:44:48.812)
	Trace[1815561358]: [10.967618336s] [10.967618336s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45432->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1790753702]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 16:44:36.400) (total time: 12412ms):
	Trace[1790753702]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42466->10.96.0.1:443: read: connection reset by peer 12412ms (16:44:48.812)
	Trace[1790753702]: [12.412082072s] [12.412082072s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324] <==
	[INFO] 10.244.1.2:38888 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185603s
	[INFO] 10.244.0.4:46391 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104951s
	[INFO] 10.244.0.4:59290 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001608985s
	[INFO] 10.244.0.4:39400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075172s
	[INFO] 10.244.2.2:50417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152413s
	[INFO] 10.244.2.2:51697 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216701s
	[INFO] 10.244.2.2:46301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158413s
	[INFO] 10.244.1.2:58450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001388s
	[INFO] 10.244.1.2:43346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108795s
	[INFO] 10.244.0.4:44420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074923s
	[INFO] 10.244.0.4:51452 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107645s
	[INFO] 10.244.2.2:44963 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121222s
	[INFO] 10.244.2.2:46302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00020113s
	[INFO] 10.244.2.2:51995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000170275s
	[INFO] 10.244.0.4:40157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126298s
	[INFO] 10.244.0.4:54438 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176652s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1955&timeout=5m1s&timeoutSeconds=301&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1955&timeout=8m39s&timeoutSeconds=519&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108] <==
	[INFO] 10.244.0.4:37034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123057s
	[INFO] 10.244.0.4:56706 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077781s
	[INFO] 10.244.2.2:48795 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014109s
	[INFO] 10.244.1.2:60733 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013497s
	[INFO] 10.244.1.2:47606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137564s
	[INFO] 10.244.0.4:43266 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102784s
	[INFO] 10.244.0.4:35773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161303s
	[INFO] 10.244.2.2:35260 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000298984s
	[INFO] 10.244.1.2:48933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119878s
	[INFO] 10.244.1.2:44462 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168252s
	[INFO] 10.244.1.2:50323 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147657s
	[INFO] 10.244.1.2:51016 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131163s
	[INFO] 10.244.0.4:50260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114104s
	[INFO] 10.244.0.4:37053 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068482s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1955&timeout=5m0s&timeoutSeconds=300&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1955&timeout=9m7s&timeoutSeconds=547&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1955&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-543552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_33_41_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:33:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:45:12 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:45:12 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:45:12 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:45:12 +0000   Tue, 16 Apr 2024 16:33:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-543552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6dd8560d23a945a5aa6d3b02a2c3dc1b
	  System UUID:                6dd8560d-23a9-45a5-aa6d-3b02a2c3dc1b
	  Boot ID:                    7c97db37-f0b9-4406-9537-1480d467974d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-zmcc2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-76f75df574-k7bn7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-76f75df574-l9zck             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-543552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7hwtp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-543552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-543552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-c9lhc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-543552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-543552                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 2m21s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-543552 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-543552 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-543552 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-543552 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Warning  ContainerGCFailed        3m50s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m14s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal   RegisteredNode           2m8s   node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal   RegisteredNode           27s    node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	
	
	Name:               ha-543552-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_34_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:34:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:47:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:45:56 +0000   Tue, 16 Apr 2024 16:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:45:56 +0000   Tue, 16 Apr 2024 16:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:45:56 +0000   Tue, 16 Apr 2024 16:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:45:56 +0000   Tue, 16 Apr 2024 16:45:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-543552-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2f4c6e70b7c46048863edfff3e863df
	  System UUID:                e2f4c6e7-0b7c-4604-8863-edfff3e863df
	  Boot ID:                    70d47971-e6dc-43b9-9f8c-edccc6c7e460
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7wbjg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-543552-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-q4275                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-543552-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-543552-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2vkts                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-543552-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-543552-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m1s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-543552-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-543552-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-543552-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  NodeNotReady             9m27s                  node-controller  Node ha-543552-m02 status is now: NodeNotReady
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m44s (x8 over 2m44s)  kubelet          Node ha-543552-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s (x8 over 2m44s)  kubelet          Node ha-543552-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s (x7 over 2m44s)  kubelet          Node ha-543552-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m14s                  node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           27s                    node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	
	
	Name:               ha-543552-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_36_05_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:47:03 +0000   Tue, 16 Apr 2024 16:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:47:03 +0000   Tue, 16 Apr 2024 16:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:47:03 +0000   Tue, 16 Apr 2024 16:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:47:03 +0000   Tue, 16 Apr 2024 16:46:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    ha-543552-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 affc17c9d3664ffba11e272d96fa3d10
	  System UUID:                affc17c9-d366-4ffb-a11e-272d96fa3d10
	  Boot ID:                    ee0dd998-f84f-4155-9af5-2745cf61627b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-2prpr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-543552-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-6wbkm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-543552-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-543552-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9ncrw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-543552-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-543552-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 40s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-543552-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-543552-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-543552-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal   RegisteredNode           2m9s               node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	  Normal   NodeNotReady             95s                node-controller  Node ha-543552-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s (x2 over 59s)  kubelet          Node ha-543552-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x2 over 59s)  kubelet          Node ha-543552-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x2 over 59s)  kubelet          Node ha-543552-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s                kubelet          Node ha-543552-m03 has been rebooted, boot id: ee0dd998-f84f-4155-9af5-2745cf61627b
	  Normal   NodeReady                59s                kubelet          Node ha-543552-m03 status is now: NodeReady
	  Normal   RegisteredNode           28s                node-controller  Node ha-543552-m03 event: Registered Node ha-543552-m03 in Controller
	
	
	Name:               ha-543552-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_36_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:36:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:47:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:47:23 +0000   Tue, 16 Apr 2024 16:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:47:23 +0000   Tue, 16 Apr 2024 16:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:47:23 +0000   Tue, 16 Apr 2024 16:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:47:23 +0000   Tue, 16 Apr 2024 16:47:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-543552-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f46fde69f5e74ab18cd1001a10200bfb
	  System UUID:                f46fde69-f5e7-4ab1-8cd1-001a10200bfb
	  Boot ID:                    e27ef21c-b7d2-48b7-9e70-fd2cf7f99c23
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hghz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-g5pqm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-543552-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-543552-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-543552-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-543552-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   RegisteredNode           2m9s               node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   NodeNotReady             95s                node-controller  Node ha-543552-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           28s                node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-543552-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-543552-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-543552-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-543552-m04 has been rebooted, boot id: e27ef21c-b7d2-48b7-9e70-fd2cf7f99c23
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-543552-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.068457] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.060006] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073697] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.185591] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.154095] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.315435] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.805735] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.066066] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.494086] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.897359] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.972784] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.095897] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.136469] kauditd_printk_skb: 21 callbacks suppressed
	[Apr16 16:34] kauditd_printk_skb: 74 callbacks suppressed
	[Apr16 16:41] kauditd_printk_skb: 1 callbacks suppressed
	[Apr16 16:44] systemd-fstab-generator[3966]: Ignoring "noauto" option for root device
	[  +0.160416] systemd-fstab-generator[3978]: Ignoring "noauto" option for root device
	[  +0.188010] systemd-fstab-generator[3992]: Ignoring "noauto" option for root device
	[  +0.157092] systemd-fstab-generator[4004]: Ignoring "noauto" option for root device
	[  +0.298229] systemd-fstab-generator[4032]: Ignoring "noauto" option for root device
	[  +0.917006] systemd-fstab-generator[4134]: Ignoring "noauto" option for root device
	[  +3.370999] kauditd_printk_skb: 140 callbacks suppressed
	[ +15.882746] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.355899] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98] <==
	{"level":"warn","ts":"2024-04-16T16:46:27.763658Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:30.689852Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1f324d4b7ab8c99d","rtt":"0s","error":"dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:30.689884Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1f324d4b7ab8c99d","rtt":"0s","error":"dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:31.765894Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.125:2380/version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:31.766195Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:35.690936Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1f324d4b7ab8c99d","rtt":"0s","error":"dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:35.69104Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1f324d4b7ab8c99d","rtt":"0s","error":"dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:35.768271Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.125:2380/version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:35.768354Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:39.77068Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.125:2380/version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:39.770816Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:40.691761Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1f324d4b7ab8c99d","rtt":"0s","error":"dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:40.691879Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1f324d4b7ab8c99d","rtt":"0s","error":"dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-16T16:46:42.109745Z","caller":"traceutil/trace.go:171","msg":"trace[119166285] transaction","detail":"{read_only:false; response_revision:2509; number_of_response:1; }","duration":"118.307498ms","start":"2024-04-16T16:46:41.991402Z","end":"2024-04-16T16:46:42.10971Z","steps":["trace[119166285] 'process raft request'  (duration: 118.14831ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:46:43.776733Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.125:2380/version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T16:46:43.776893Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1f324d4b7ab8c99d","error":"Get \"https://192.168.39.125:2380/version\": dial tcp 192.168.39.125:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-16T16:46:43.931144Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:46:43.931234Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:46:43.953227Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:46:43.96738Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f61fae125a956d36","to":"1f324d4b7ab8c99d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-16T16:46:43.967475Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:46:43.990767Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f61fae125a956d36","to":"1f324d4b7ab8c99d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-16T16:46:43.990943Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"warn","ts":"2024-04-16T16:46:46.338834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.744769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-543552-m03\" ","response":"range_response_count:1 size:5801"}
	{"level":"info","ts":"2024-04-16T16:46:46.340277Z","caller":"traceutil/trace.go:171","msg":"trace[1831489289] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-543552-m03; range_end:; response_count:1; response_revision:2520; }","duration":"142.32909ms","start":"2024-04-16T16:46:46.197921Z","end":"2024-04-16T16:46:46.34025Z","steps":["trace[1831489289] 'agreement among raft nodes before linearized reading'  (duration: 78.244694ms)","trace[1831489289] 'range keys from in-memory index tree'  (duration: 62.459691ms)"],"step_count":2}
	
	
	==> etcd [ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1] <==
	2024/04/16 16:42:47 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-16T16:42:47.434802Z","caller":"traceutil/trace.go:171","msg":"trace[417903395] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"428.202181ms","start":"2024-04-16T16:42:47.006594Z","end":"2024-04-16T16:42:47.434797Z","steps":["trace[417903395] 'agreement among raft nodes before linearized reading'  (duration: 410.832589ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:42:47.435086Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T16:42:47.006583Z","time spent":"428.491426ms","remote":"127.0.0.1:46958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:500 "}
	2024/04/16 16:42:47 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-16T16:42:47.577146Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7869634524914769506,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-16T16:42:47.697474Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T16:42:47.697604Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T16:42:47.697746Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"f61fae125a956d36","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-16T16:42:47.697926Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698039Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698123Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698259Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.69841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698487Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698501Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698512Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698521Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.69854Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698653Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698683Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698708Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698718Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.702152Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-04-16T16:42:47.702385Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-04-16T16:42:47.702425Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-543552","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	
	==> kernel <==
	 16:47:32 up 14 min,  0 users,  load average: 0.42, 0.49, 0.34
	Linux ha-543552 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b] <==
	I0416 16:44:25.099371       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0416 16:44:25.099455       1 main.go:107] hostIP = 192.168.39.97
	podIP = 192.168.39.97
	I0416 16:44:25.099611       1 main.go:116] setting mtu 1500 for CNI 
	I0416 16:44:25.099660       1 main.go:146] kindnetd IP family: "ipv4"
	I0416 16:44:25.099698       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0416 16:44:27.308606       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0416 16:44:37.309700       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0416 16:44:48.812527       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.135:32802->10.96.0.1:443: read: connection reset by peer
	I0416 16:44:51.884347       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0416 16:44:54.956518       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1] <==
	I0416 16:46:54.562845       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:47:04.580623       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:47:04.580674       1 main.go:227] handling current node
	I0416 16:47:04.580771       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:47:04.580800       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:47:04.581126       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:47:04.581174       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:47:04.581253       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:47:04.581277       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:47:14.598748       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:47:14.598853       1 main.go:227] handling current node
	I0416 16:47:14.598876       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:47:14.599005       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:47:14.599139       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:47:14.599163       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:47:14.599223       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:47:14.599241       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:47:24.614493       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:47:24.615884       1 main.go:227] handling current node
	I0416 16:47:24.616002       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:47:24.616035       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:47:24.616160       1 main.go:223] Handling node with IPs: map[192.168.39.125:{}]
	I0416 16:47:24.616181       1 main.go:250] Node ha-543552-m03 has CIDR [10.244.2.0/24] 
	I0416 16:47:24.616234       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:47:24.616251       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1] <==
	I0416 16:45:10.329770       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0416 16:45:10.329806       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 16:45:10.329827       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 16:45:10.338730       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 16:45:10.339133       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 16:45:10.457796       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:45:10.525527       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 16:45:10.525754       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 16:45:10.525790       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 16:45:10.525903       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:45:10.526697       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:45:10.527893       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 16:45:10.530752       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:45:10.531593       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:45:10.531741       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:45:10.531849       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:45:10.531892       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:45:10.539560       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0416 16:45:10.539606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.125 192.168.39.80]
	I0416 16:45:10.541216       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:45:10.551286       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0416 16:45:10.559724       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0416 16:45:11.339139       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0416 16:45:11.781356       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.125 192.168.39.80 192.168.39.97]
	W0416 16:45:21.783473       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.80 192.168.39.97]
	
	
	==> kube-apiserver [95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9] <==
	I0416 16:44:25.009012       1 options.go:222] external host was not specified, using 192.168.39.97
	I0416 16:44:25.012042       1 server.go:148] Version: v1.29.3
	I0416 16:44:25.012102       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:44:25.731479       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0416 16:44:25.731528       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0416 16:44:25.731773       1 instance.go:297] Using reconciler: lease
	I0416 16:44:25.734522       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W0416 16:44:45.728081       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0416 16:44:45.728276       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0416 16:44:45.734607       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09] <==
	I0416 16:45:23.386436       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0416 16:45:23.409049       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0416 16:45:23.418277       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 16:45:23.737158       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:45:23.737342       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0416 16:45:23.776924       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 16:45:29.683886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="35.28047ms"
	I0416 16:45:29.684051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="99.597µs"
	I0416 16:45:48.446363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="19.618116ms"
	I0416 16:45:48.450213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="310.073µs"
	I0416 16:45:48.476719       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0416 16:45:48.487472       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-9nrpt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9nrpt\": the object has been modified; please apply your changes to the latest version and try again"
	I0416 16:45:48.487557       1 event.go:364] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"4244f2fc-b308-467d-9783-27f85fb3d90d", APIVersion:"v1", ResourceVersion:"245", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9nrpt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9nrpt": the object has been modified; please apply your changes to the latest version and try again
	I0416 16:45:54.430034       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-9nrpt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-9nrpt\": the object has been modified; please apply your changes to the latest version and try again"
	I0416 16:45:54.430203       1 event.go:364] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"4244f2fc-b308-467d-9783-27f85fb3d90d", APIVersion:"v1", ResourceVersion:"245", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-9nrpt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-9nrpt": the object has been modified; please apply your changes to the latest version and try again
	I0416 16:45:54.430706       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0416 16:45:54.461489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="57.510585ms"
	I0416 16:45:54.461659       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.846µs"
	I0416 16:45:57.975924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.108739ms"
	I0416 16:45:57.976511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="43.176µs"
	I0416 16:46:34.286501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="101.405µs"
	I0416 16:46:37.993367       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2prpr" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-2prpr"
	I0416 16:46:55.843814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.557268ms"
	I0416 16:46:55.844022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.933µs"
	I0416 16:47:23.772618       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-543552-m04"
	
	
	==> kube-controller-manager [eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f] <==
	I0416 16:44:25.968889       1 serving.go:380] Generated self-signed cert in-memory
	I0416 16:44:26.291344       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0416 16:44:26.291392       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:44:26.293333       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 16:44:26.293395       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 16:44:26.293632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0416 16:44:26.294361       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0416 16:44:46.742249       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.97:8443/healthz\": dial tcp 192.168.39.97:8443: connect: connection refused"
	
	
	==> kube-proxy [697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18] <==
	E0416 16:41:44.751738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:47.822704       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:47.822863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:47.823067       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:47.823261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:47.823638       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:47.823761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:53.966263       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:53.966367       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:53.966594       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:53.966747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:57.037502       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:57.037607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:06.253685       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:06.254085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:06.254286       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:06.254383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:09.325178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:09.325460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:24.687660       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:24.687911       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:27.756585       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:27.756656       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:30.828670       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:30.828809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b] <==
	E0416 16:44:49.069280       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-543552\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 16:45:10.575126       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-543552\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0416 16:45:10.575244       1 server.go:1020] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0416 16:45:10.626343       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:45:10.626446       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:45:10.626515       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:45:10.631583       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:45:10.631886       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:45:10.632281       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:45:10.633854       1 config.go:188] "Starting service config controller"
	I0416 16:45:10.634109       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:45:10.634184       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:45:10.634208       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:45:10.634263       1 config.go:315] "Starting node config controller"
	I0416 16:45:10.634301       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0416 16:45:13.644740       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0416 16:45:13.644943       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:45:13.645151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:45:13.644935       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:45:13.645220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:45:13.645404       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:45:13.645484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0416 16:45:14.734940       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:45:15.035677       1 shared_informer.go:318] Caches are synced for node config
	I0416 16:45:15.235083       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2] <==
	W0416 16:45:02.242471       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.97:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:02.242517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.97:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:02.382252       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.97:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:02.382360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.97:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:02.419297       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:02.419368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:03.205588       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.97:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:03.205715       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.97:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:04.102617       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:04.102749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:04.286693       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.97:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:04.286760       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.97:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:04.317395       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.97:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:04.317464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.97:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:04.751421       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.39.97:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:04.751513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.97:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:05.223868       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.97:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:05.224133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.97:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:05.528909       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:05.529154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:06.774295       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:06.774375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:08.345476       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.97:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:08.345567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.97:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	I0416 16:45:23.749291       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9] <==
	E0416 16:42:44.784738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:42:45.356938       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:42:45.357100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:42:45.647158       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:42:45.647262       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:42:46.150484       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:42:46.150586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:42:46.166256       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:42:46.166336       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:42:46.232926       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 16:42:46.233089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 16:42:46.543123       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:42:46.543155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:42:46.739242       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 16:42:46.739269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 16:42:46.930812       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:42:46.930840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:42:47.123025       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:42:47.123090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:42:47.287904       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:42:47.287930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0416 16:42:47.389557       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 16:42:47.389822       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 16:42:47.399072       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 16:42:47.399300       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 16 16:45:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:45:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:45:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:45:43 ha-543552 kubelet[1371]: I0416 16:45:43.381244    1371 scope.go:117] "RemoveContainer" containerID="20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b"
	Apr 16 16:45:54 ha-543552 kubelet[1371]: I0416 16:45:54.380860    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:45:54 ha-543552 kubelet[1371]: E0416 16:45:54.381167    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(663f4c76-01f8-4664-9345-740540fdc41c)\"" pod="kube-system/storage-provisioner" podUID="663f4c76-01f8-4664-9345-740540fdc41c"
	Apr 16 16:45:59 ha-543552 kubelet[1371]: I0416 16:45:59.381366    1371 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-543552" podUID="73f7261f-431b-4d66-9567-cd65dafbf212"
	Apr 16 16:45:59 ha-543552 kubelet[1371]: I0416 16:45:59.412610    1371 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-543552"
	Apr 16 16:46:06 ha-543552 kubelet[1371]: I0416 16:46:06.380819    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:46:06 ha-543552 kubelet[1371]: E0416 16:46:06.381383    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(663f4c76-01f8-4664-9345-740540fdc41c)\"" pod="kube-system/storage-provisioner" podUID="663f4c76-01f8-4664-9345-740540fdc41c"
	Apr 16 16:46:19 ha-543552 kubelet[1371]: I0416 16:46:19.381542    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:46:19 ha-543552 kubelet[1371]: E0416 16:46:19.381807    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(663f4c76-01f8-4664-9345-740540fdc41c)\"" pod="kube-system/storage-provisioner" podUID="663f4c76-01f8-4664-9345-740540fdc41c"
	Apr 16 16:46:30 ha-543552 kubelet[1371]: I0416 16:46:30.381712    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:46:30 ha-543552 kubelet[1371]: E0416 16:46:30.382163    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(663f4c76-01f8-4664-9345-740540fdc41c)\"" pod="kube-system/storage-provisioner" podUID="663f4c76-01f8-4664-9345-740540fdc41c"
	Apr 16 16:46:41 ha-543552 kubelet[1371]: E0416 16:46:41.431883    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:46:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:46:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:46:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:46:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:46:45 ha-543552 kubelet[1371]: I0416 16:46:45.381343    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:46:45 ha-543552 kubelet[1371]: E0416 16:46:45.381564    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(663f4c76-01f8-4664-9345-740540fdc41c)\"" pod="kube-system/storage-provisioner" podUID="663f4c76-01f8-4664-9345-740540fdc41c"
	Apr 16 16:47:00 ha-543552 kubelet[1371]: I0416 16:47:00.381090    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:47:00 ha-543552 kubelet[1371]: E0416 16:47:00.381831    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(663f4c76-01f8-4664-9345-740540fdc41c)\"" pod="kube-system/storage-provisioner" podUID="663f4c76-01f8-4664-9345-740540fdc41c"
	Apr 16 16:47:14 ha-543552 kubelet[1371]: I0416 16:47:14.380853    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:47:14 ha-543552 kubelet[1371]: I0416 16:47:14.911669    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-543552" podStartSLOduration=75.911571355 podStartE2EDuration="1m15.911571355s" podCreationTimestamp="2024-04-16 16:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-16 16:46:01.401611284 +0000 UTC m=+740.236063285" watchObservedRunningTime="2024-04-16 16:47:14.911571355 +0000 UTC m=+813.746023360"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 16:47:30.830895   28518 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18649-3628/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
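The stderr capture above ends with "bufio.Scanner: token too long" while reading lastStart.txt: a default bufio.Scanner refuses any line longer than 64 KiB, so one oversized log line aborts the whole read. A minimal Go sketch of reading such a file with an enlarged scanner buffer (the path is taken from the error above; the 10 MiB cap is an illustrative assumption, not minikube's actual setting):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the "failed to read file" error in the capture above.
		f, err := os.Open("/home/jenkins/minikube-integration/18649-3628/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// Default max token size is 64 KiB; allow lines up to 10 MiB instead.
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}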
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-543552 -n ha-543552
helpers_test.go:261: (dbg) Run:  kubectl --context ha-543552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (409.69s)
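The kubelet entries above show storage-provisioner in CrashLoopBackOff with a 2m40s delay. The kubelet's container restart back-off starts at 10s, doubles on each crash, and is capped at 5m, so 2m40s is the fifth step (10s, 20s, 40s, 1m20s, 2m40s). A small Go sketch of that capped-doubling schedule (the 10s/5m constants are the commonly documented kubelet defaults, not values read from this run):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const (
			initial = 10 * time.Second // initial container restart back-off
			max     = 5 * time.Minute  // back-off cap
		)
		backoff := initial
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, backoff)
			backoff *= 2
			if backoff > max {
				backoff = max
			}
		}
	}

Running it prints 2m40s at the fifth restart and 5m0s from the sixth onward, matching the "back-off 2m40s restarting failed container" messages above.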

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 stop -v=7 --alsologtostderr
E0416 16:48:26.934647   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 stop -v=7 --alsologtostderr: exit status 82 (2m0.503074911s)

                                                
                                                
-- stdout --
	* Stopping node "ha-543552-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:47:51.478914   28940 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:47:51.479028   28940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:47:51.479037   28940 out.go:304] Setting ErrFile to fd 2...
	I0416 16:47:51.479041   28940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:47:51.479226   28940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:47:51.479462   28940 out.go:298] Setting JSON to false
	I0416 16:47:51.479537   28940 mustload.go:65] Loading cluster: ha-543552
	I0416 16:47:51.479881   28940 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:47:51.479967   28940 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:47:51.480147   28940 mustload.go:65] Loading cluster: ha-543552
	I0416 16:47:51.480272   28940 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:47:51.480296   28940 stop.go:39] StopHost: ha-543552-m04
	I0416 16:47:51.480682   28940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:47:51.480732   28940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:47:51.499042   28940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
	I0416 16:47:51.499589   28940 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:47:51.500159   28940 main.go:141] libmachine: Using API Version  1
	I0416 16:47:51.500181   28940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:47:51.500626   28940 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:47:51.503077   28940 out.go:177] * Stopping node "ha-543552-m04"  ...
	I0416 16:47:51.504637   28940 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 16:47:51.504666   28940 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:47:51.504938   28940 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 16:47:51.504971   28940 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:47:51.507627   28940 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:47:51.507996   28940 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:47:18 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:47:51.508035   28940 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:47:51.508216   28940 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:47:51.508400   28940 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:47:51.508560   28940 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:47:51.508676   28940 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	I0416 16:47:51.598540   28940 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 16:47:51.654813   28940 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 16:47:51.713082   28940 main.go:141] libmachine: Stopping "ha-543552-m04"...
	I0416 16:47:51.713111   28940 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:47:51.714636   28940 main.go:141] libmachine: (ha-543552-m04) Calling .Stop
	I0416 16:47:51.718549   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 0/120
	I0416 16:47:52.720159   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 1/120
	I0416 16:47:53.721637   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 2/120
	I0416 16:47:54.723723   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 3/120
	I0416 16:47:55.724975   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 4/120
	I0416 16:47:56.726966   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 5/120
	I0416 16:47:57.728459   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 6/120
	I0416 16:47:58.730001   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 7/120
	I0416 16:47:59.731543   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 8/120
	I0416 16:48:00.733136   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 9/120
	I0416 16:48:01.735242   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 10/120
	I0416 16:48:02.736578   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 11/120
	I0416 16:48:03.737954   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 12/120
	I0416 16:48:04.739455   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 13/120
	I0416 16:48:05.740821   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 14/120
	I0416 16:48:06.743245   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 15/120
	I0416 16:48:07.744727   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 16/120
	I0416 16:48:08.746275   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 17/120
	I0416 16:48:09.747627   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 18/120
	I0416 16:48:10.748938   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 19/120
	I0416 16:48:11.751436   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 20/120
	I0416 16:48:12.752969   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 21/120
	I0416 16:48:13.754915   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 22/120
	I0416 16:48:14.756470   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 23/120
	I0416 16:48:15.758013   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 24/120
	I0416 16:48:16.760355   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 25/120
	I0416 16:48:17.761949   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 26/120
	I0416 16:48:18.763653   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 27/120
	I0416 16:48:19.765140   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 28/120
	I0416 16:48:20.767230   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 29/120
	I0416 16:48:21.769721   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 30/120
	I0416 16:48:22.771252   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 31/120
	I0416 16:48:23.772745   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 32/120
	I0416 16:48:24.774188   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 33/120
	I0416 16:48:25.775628   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 34/120
	I0416 16:48:26.777184   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 35/120
	I0416 16:48:27.778596   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 36/120
	I0416 16:48:28.780064   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 37/120
	I0416 16:48:29.781756   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 38/120
	I0416 16:48:30.783911   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 39/120
	I0416 16:48:31.786174   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 40/120
	I0416 16:48:32.787769   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 41/120
	I0416 16:48:33.789218   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 42/120
	I0416 16:48:34.790976   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 43/120
	I0416 16:48:35.792846   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 44/120
	I0416 16:48:36.794702   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 45/120
	I0416 16:48:37.796666   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 46/120
	I0416 16:48:38.798017   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 47/120
	I0416 16:48:39.799533   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 48/120
	I0416 16:48:40.800937   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 49/120
	I0416 16:48:41.803222   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 50/120
	I0416 16:48:42.804539   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 51/120
	I0416 16:48:43.806018   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 52/120
	I0416 16:48:44.807529   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 53/120
	I0416 16:48:45.809801   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 54/120
	I0416 16:48:46.811370   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 55/120
	I0416 16:48:47.813131   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 56/120
	I0416 16:48:48.814452   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 57/120
	I0416 16:48:49.816254   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 58/120
	I0416 16:48:50.817910   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 59/120
	I0416 16:48:51.820183   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 60/120
	I0416 16:48:52.821580   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 61/120
	I0416 16:48:53.823255   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 62/120
	I0416 16:48:54.824915   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 63/120
	I0416 16:48:55.826368   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 64/120
	I0416 16:48:56.828655   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 65/120
	I0416 16:48:57.830110   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 66/120
	I0416 16:48:58.831764   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 67/120
	I0416 16:48:59.833449   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 68/120
	I0416 16:49:00.834907   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 69/120
	I0416 16:49:01.837203   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 70/120
	I0416 16:49:02.839211   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 71/120
	I0416 16:49:03.840642   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 72/120
	I0416 16:49:04.841818   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 73/120
	I0416 16:49:05.843114   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 74/120
	I0416 16:49:06.845602   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 75/120
	I0416 16:49:07.847557   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 76/120
	I0416 16:49:08.849017   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 77/120
	I0416 16:49:09.850313   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 78/120
	I0416 16:49:10.851713   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 79/120
	I0416 16:49:11.853593   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 80/120
	I0416 16:49:12.855202   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 81/120
	I0416 16:49:13.856594   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 82/120
	I0416 16:49:14.858059   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 83/120
	I0416 16:49:15.859534   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 84/120
	I0416 16:49:16.861547   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 85/120
	I0416 16:49:17.863142   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 86/120
	I0416 16:49:18.864663   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 87/120
	I0416 16:49:19.866139   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 88/120
	I0416 16:49:20.867769   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 89/120
	I0416 16:49:21.869941   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 90/120
	I0416 16:49:22.871238   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 91/120
	I0416 16:49:23.872698   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 92/120
	I0416 16:49:24.874031   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 93/120
	I0416 16:49:25.875377   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 94/120
	I0416 16:49:26.877523   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 95/120
	I0416 16:49:27.879351   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 96/120
	I0416 16:49:28.880921   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 97/120
	I0416 16:49:29.882388   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 98/120
	I0416 16:49:30.883803   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 99/120
	I0416 16:49:31.885719   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 100/120
	I0416 16:49:32.887118   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 101/120
	I0416 16:49:33.888437   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 102/120
	I0416 16:49:34.890548   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 103/120
	I0416 16:49:35.892037   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 104/120
	I0416 16:49:36.894049   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 105/120
	I0416 16:49:37.895531   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 106/120
	I0416 16:49:38.896790   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 107/120
	I0416 16:49:39.898029   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 108/120
	I0416 16:49:40.899321   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 109/120
	I0416 16:49:41.900787   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 110/120
	I0416 16:49:42.902071   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 111/120
	I0416 16:49:43.903293   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 112/120
	I0416 16:49:44.904919   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 113/120
	I0416 16:49:45.906117   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 114/120
	I0416 16:49:46.907849   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 115/120
	I0416 16:49:47.909872   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 116/120
	I0416 16:49:48.911311   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 117/120
	I0416 16:49:49.912825   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 118/120
	I0416 16:49:50.914324   28940 main.go:141] libmachine: (ha-543552-m04) Waiting for machine to stop 119/120
	I0416 16:49:51.915296   28940 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 16:49:51.915360   28940 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 16:49:51.917501   28940 out.go:177] 
	W0416 16:49:51.919156   28940 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 16:49:51.919172   28940 out.go:239] * 
	* 
	W0416 16:49:51.922242   28940 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 16:49:51.923738   28940 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-543552 stop -v=7 --alsologtostderr": exit status 82
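The exit status 82 (GUEST_STOP_TIMEOUT) above comes from the bounded wait visible in the stderr capture: the driver polls the VM state once a second, logs "Waiting for machine to stop i/120", and gives up after all 120 attempts while the machine still reports "Running". A minimal Go sketch of that bounded-polling pattern (the state callback and the 120 x 1s budget are illustrative stand-ins, not minikube's libmachine code):

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// waitForStop polls state() once per second and gives up after maxRetries
	// attempts, mirroring the "Waiting for machine to stop i/120" lines above.
	func waitForStop(state func() (string, error), maxRetries int) error {
		last := ""
		for i := 0; i < maxRetries; i++ {
			s, err := state()
			if err != nil {
				return err
			}
			if s == "Stopped" {
				return nil
			}
			last = s
			log.Printf("Waiting for machine to stop %d/%d", i, maxRetries)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", last)
	}

	func main() {
		// A state function that never reports "Stopped" reproduces the timeout
		// path seen in this test (takes ~2 minutes to fail, like the real stop).
		err := waitForStop(func() (string, error) { return "Running", nil }, 120)
		fmt.Println(err) // unable to stop vm, current state "Running"
	}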
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr: exit status 3 (19.116075342s)

                                                
                                                
-- stdout --
	ha-543552
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-543552-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:49:51.984203   29365 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:49:51.984333   29365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:49:51.984342   29365 out.go:304] Setting ErrFile to fd 2...
	I0416 16:49:51.984346   29365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:49:51.984556   29365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:49:51.984735   29365 out.go:298] Setting JSON to false
	I0416 16:49:51.984761   29365 mustload.go:65] Loading cluster: ha-543552
	I0416 16:49:51.984807   29365 notify.go:220] Checking for updates...
	I0416 16:49:51.985270   29365 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:49:51.985292   29365 status.go:255] checking status of ha-543552 ...
	I0416 16:49:51.985708   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:51.985783   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.004506   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44119
	I0416 16:49:52.005071   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.005690   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.005718   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.006119   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.006350   29365 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:49:52.008009   29365 status.go:330] ha-543552 host status = "Running" (err=<nil>)
	I0416 16:49:52.008029   29365 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:49:52.008350   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:52.008390   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.023792   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0416 16:49:52.024194   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.024618   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.024639   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.025007   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.025218   29365 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:49:52.028060   29365 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:49:52.028455   29365 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:49:52.028494   29365 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:49:52.028590   29365 host.go:66] Checking if "ha-543552" exists ...
	I0416 16:49:52.028910   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:52.028953   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.043434   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41037
	I0416 16:49:52.043904   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.044424   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.044445   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.044748   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.044948   29365 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:49:52.045101   29365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:49:52.045128   29365 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:49:52.047997   29365 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:49:52.048390   29365 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:49:52.048427   29365 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:49:52.048578   29365 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:49:52.048755   29365 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:49:52.048943   29365 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:49:52.049131   29365 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:49:52.136081   29365 ssh_runner.go:195] Run: systemctl --version
	I0416 16:49:52.144253   29365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:49:52.164211   29365 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:49:52.164245   29365 api_server.go:166] Checking apiserver status ...
	I0416 16:49:52.164278   29365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:49:52.185237   29365 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5328/cgroup
	W0416 16:49:52.200014   29365 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5328/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:49:52.200061   29365 ssh_runner.go:195] Run: ls
	I0416 16:49:52.205627   29365 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:49:52.212653   29365 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:49:52.212681   29365 status.go:422] ha-543552 apiserver status = Running (err=<nil>)
	I0416 16:49:52.212692   29365 status.go:257] ha-543552 status: &{Name:ha-543552 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:49:52.212709   29365 status.go:255] checking status of ha-543552-m02 ...
	I0416 16:49:52.213050   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:52.213091   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.228042   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0416 16:49:52.228537   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.229049   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.229068   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.229388   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.229592   29365 main.go:141] libmachine: (ha-543552-m02) Calling .GetState
	I0416 16:49:52.231285   29365 status.go:330] ha-543552-m02 host status = "Running" (err=<nil>)
	I0416 16:49:52.231301   29365 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:49:52.231612   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:52.231656   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.247632   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0416 16:49:52.248054   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.248583   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.248621   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.249008   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.249199   29365 main.go:141] libmachine: (ha-543552-m02) Calling .GetIP
	I0416 16:49:52.251905   29365 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:49:52.252312   29365 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:44:34 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:49:52.252335   29365 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:49:52.252451   29365 host.go:66] Checking if "ha-543552-m02" exists ...
	I0416 16:49:52.252783   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:52.252818   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.267431   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0416 16:49:52.267834   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.268384   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.268403   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.268756   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.268975   29365 main.go:141] libmachine: (ha-543552-m02) Calling .DriverName
	I0416 16:49:52.269289   29365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:49:52.269315   29365 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHHostname
	I0416 16:49:52.272228   29365 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:49:52.272738   29365 main.go:141] libmachine: (ha-543552-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:b0:d7", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:44:34 +0000 UTC Type:0 Mac:52:54:00:bd:b0:d7 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-543552-m02 Clientid:01:52:54:00:bd:b0:d7}
	I0416 16:49:52.272763   29365 main.go:141] libmachine: (ha-543552-m02) DBG | domain ha-543552-m02 has defined IP address 192.168.39.80 and MAC address 52:54:00:bd:b0:d7 in network mk-ha-543552
	I0416 16:49:52.272925   29365 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHPort
	I0416 16:49:52.273105   29365 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHKeyPath
	I0416 16:49:52.273260   29365 main.go:141] libmachine: (ha-543552-m02) Calling .GetSSHUsername
	I0416 16:49:52.273428   29365 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m02/id_rsa Username:docker}
	I0416 16:49:52.363865   29365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 16:49:52.384493   29365 kubeconfig.go:125] found "ha-543552" server: "https://192.168.39.254:8443"
	I0416 16:49:52.384520   29365 api_server.go:166] Checking apiserver status ...
	I0416 16:49:52.384550   29365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 16:49:52.405790   29365 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup
	W0416 16:49:52.420214   29365 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 16:49:52.420278   29365 ssh_runner.go:195] Run: ls
	I0416 16:49:52.427034   29365 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 16:49:52.432825   29365 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 16:49:52.432878   29365 status.go:422] ha-543552-m02 apiserver status = Running (err=<nil>)
	I0416 16:49:52.432893   29365 status.go:257] ha-543552-m02 status: &{Name:ha-543552-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 16:49:52.432918   29365 status.go:255] checking status of ha-543552-m04 ...
	I0416 16:49:52.433202   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:52.433251   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.448492   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0416 16:49:52.448898   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.449466   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.449492   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.449813   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.450073   29365 main.go:141] libmachine: (ha-543552-m04) Calling .GetState
	I0416 16:49:52.451794   29365 status.go:330] ha-543552-m04 host status = "Running" (err=<nil>)
	I0416 16:49:52.451809   29365 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:49:52.452103   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:52.452142   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.467388   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38323
	I0416 16:49:52.467804   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.468270   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.468293   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.468625   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.468816   29365 main.go:141] libmachine: (ha-543552-m04) Calling .GetIP
	I0416 16:49:52.471283   29365 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:49:52.471706   29365 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:47:18 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:49:52.471736   29365 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:49:52.471887   29365 host.go:66] Checking if "ha-543552-m04" exists ...
	I0416 16:49:52.472193   29365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:49:52.472230   29365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:49:52.487218   29365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0416 16:49:52.487646   29365 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:49:52.488130   29365 main.go:141] libmachine: Using API Version  1
	I0416 16:49:52.488151   29365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:49:52.488458   29365 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:49:52.488648   29365 main.go:141] libmachine: (ha-543552-m04) Calling .DriverName
	I0416 16:49:52.488807   29365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 16:49:52.488825   29365 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHHostname
	I0416 16:49:52.491109   29365 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:49:52.491512   29365 main.go:141] libmachine: (ha-543552-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:80:93", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:47:18 +0000 UTC Type:0 Mac:52:54:00:ef:80:93 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:ha-543552-m04 Clientid:01:52:54:00:ef:80:93}
	I0416 16:49:52.491538   29365 main.go:141] libmachine: (ha-543552-m04) DBG | domain ha-543552-m04 has defined IP address 192.168.39.126 and MAC address 52:54:00:ef:80:93 in network mk-ha-543552
	I0416 16:49:52.491685   29365 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHPort
	I0416 16:49:52.491860   29365 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHKeyPath
	I0416 16:49:52.492022   29365 main.go:141] libmachine: (ha-543552-m04) Calling .GetSSHUsername
	I0416 16:49:52.492167   29365 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552-m04/id_rsa Username:docker}
	W0416 16:50:11.041090   29365 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.126:22: connect: no route to host
	W0416 16:50:11.041171   29365 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0416 16:50:11.041209   29365 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	I0416 16:50:11.041219   29365 status.go:257] ha-543552-m04 status: &{Name:ha-543552-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0416 16:50:11.041240   29365 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host

                                                
                                                
** /stderr **
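The stderr above shows the proximate cause of the status failure: control-plane nodes ha-543552 and ha-543552-m02 report healthy apiservers, but the worker ha-543552-m04 never answers on SSH, so every probe of 192.168.39.126:22 fails with "connect: no route to host" and the node is reported as Host:Error / Kubelet:Nonexistent, which the assertion below records as exit status 3. The sketch that follows is a minimal, hypothetical reachability probe, not minikube's own code; it only mirrors the kind of TCP dial the log shows timing out (the address is taken from the log above).

    // probe_ssh.go — hypothetical sketch; dials the node's SSH port with a
    // timeout and treats a dial error as "node unreachable", mirroring the
    // "dial tcp 192.168.39.126:22: connect: no route to host" lines above.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	addr := "192.168.39.126:22" // worker node IP from the log above
    	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
    	if err != nil {
    		// Same condition the status command hit: the SSH port is
    		// unreachable, so the node's state cannot be collected.
    		fmt.Fprintf(os.Stderr, "node unreachable: %v\n", err)
    		os.Exit(3) // non-zero exit, as the test observed from minikube status
    	}
    	conn.Close()
    	fmt.Println("ssh port reachable")
    }
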
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-543552 -n ha-543552
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-543552 logs -n 25: (1.9286641s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-543552 ssh -n ha-543552-m02 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04:/home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m04 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp testdata/cp-test.txt                                                | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552:/home/docker/cp-test_ha-543552-m04_ha-543552.txt                       |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552 sudo cat                                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552.txt                                 |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m02:/home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m02 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m03:/home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n                                                                 | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | ha-543552-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-543552 ssh -n ha-543552-m03 sudo cat                                          | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC | 16 Apr 24 16:37 UTC |
	|         | /home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-543552 node stop m02 -v=7                                                     | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-543552 node start m02 -v=7                                                    | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-543552 -v=7                                                           | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-543552 -v=7                                                                | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:40 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-543552 --wait=true -v=7                                                    | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:42 UTC | 16 Apr 24 16:47 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-543552                                                                | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	| node    | ha-543552 node delete m03 -v=7                                                   | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC | 16 Apr 24 16:47 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-543552 stop -v=7                                                              | ha-543552 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:42:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:42:46.447058   27095 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:42:46.447302   27095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:42:46.447311   27095 out.go:304] Setting ErrFile to fd 2...
	I0416 16:42:46.447315   27095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:42:46.447472   27095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:42:46.447981   27095 out.go:298] Setting JSON to false
	I0416 16:42:46.448825   27095 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1518,"bootTime":1713284248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:42:46.448895   27095 start.go:139] virtualization: kvm guest
	I0416 16:42:46.452101   27095 out.go:177] * [ha-543552] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:42:46.453764   27095 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:42:46.453790   27095 notify.go:220] Checking for updates...
	I0416 16:42:46.455420   27095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:42:46.457087   27095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:42:46.458556   27095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:42:46.459940   27095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:42:46.461380   27095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:42:46.463064   27095 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:42:46.463174   27095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:42:46.463569   27095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:42:46.463626   27095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:42:46.478825   27095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36183
	I0416 16:42:46.479301   27095 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:42:46.479926   27095 main.go:141] libmachine: Using API Version  1
	I0416 16:42:46.479950   27095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:42:46.480354   27095 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:42:46.480517   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:42:46.514609   27095 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 16:42:46.515781   27095 start.go:297] selected driver: kvm2
	I0416 16:42:46.515793   27095 start.go:901] validating driver "kvm2" against &{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.126 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:42:46.515952   27095 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:42:46.516289   27095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:42:46.516361   27095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:42:46.530554   27095 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:42:46.532268   27095 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 16:42:46.532316   27095 cni.go:84] Creating CNI manager for ""
	I0416 16:42:46.532322   27095 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0416 16:42:46.532369   27095 start.go:340] cluster config:
	{Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.126 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:42:46.532512   27095 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:42:46.535045   27095 out.go:177] * Starting "ha-543552" primary control-plane node in "ha-543552" cluster
	I0416 16:42:46.536321   27095 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:42:46.536360   27095 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 16:42:46.536371   27095 cache.go:56] Caching tarball of preloaded images
	I0416 16:42:46.536453   27095 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 16:42:46.536465   27095 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 16:42:46.536571   27095 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/config.json ...
	I0416 16:42:46.536766   27095 start.go:360] acquireMachinesLock for ha-543552: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 16:42:46.536809   27095 start.go:364] duration metric: took 23.709µs to acquireMachinesLock for "ha-543552"
	I0416 16:42:46.536825   27095 start.go:96] Skipping create...Using existing machine configuration
	I0416 16:42:46.536894   27095 fix.go:54] fixHost starting: 
	I0416 16:42:46.537168   27095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:42:46.537201   27095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:42:46.550576   27095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0416 16:42:46.551006   27095 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:42:46.551530   27095 main.go:141] libmachine: Using API Version  1
	I0416 16:42:46.551554   27095 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:42:46.551881   27095 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:42:46.552113   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:42:46.552309   27095 main.go:141] libmachine: (ha-543552) Calling .GetState
	I0416 16:42:46.553860   27095 fix.go:112] recreateIfNeeded on ha-543552: state=Running err=<nil>
	W0416 16:42:46.553899   27095 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 16:42:46.556652   27095 out.go:177] * Updating the running kvm2 "ha-543552" VM ...
	I0416 16:42:46.558030   27095 machine.go:94] provisionDockerMachine start ...
	I0416 16:42:46.558051   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:42:46.558281   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:46.560779   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.561257   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.561281   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.561434   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:46.561615   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.561758   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.561894   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:46.562034   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:42:46.562267   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:42:46.562285   27095 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 16:42:46.698478   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552
	
	I0416 16:42:46.698513   27095 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:42:46.698742   27095 buildroot.go:166] provisioning hostname "ha-543552"
	I0416 16:42:46.698771   27095 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:42:46.698972   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:46.701667   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.702042   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.702079   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.702200   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:46.702374   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.702545   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.702679   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:46.702854   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:42:46.703073   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:42:46.703103   27095 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-543552 && echo "ha-543552" | sudo tee /etc/hostname
	I0416 16:42:46.842581   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-543552
	
	I0416 16:42:46.842616   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:46.845249   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.845646   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.845674   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.845797   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:46.845985   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.846166   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:46.846312   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:46.846476   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:42:46.846650   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:42:46.846668   27095 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-543552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-543552/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-543552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 16:42:46.958123   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 16:42:46.958150   27095 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 16:42:46.958187   27095 buildroot.go:174] setting up certificates
	I0416 16:42:46.958195   27095 provision.go:84] configureAuth start
	I0416 16:42:46.958203   27095 main.go:141] libmachine: (ha-543552) Calling .GetMachineName
	I0416 16:42:46.958562   27095 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:42:46.961300   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.961665   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.961691   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.961780   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:46.964088   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.964404   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:46.964436   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:46.964555   27095 provision.go:143] copyHostCerts
	I0416 16:42:46.964585   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:42:46.964634   27095 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 16:42:46.964655   27095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 16:42:46.964738   27095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 16:42:46.964850   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:42:46.964878   27095 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 16:42:46.964889   27095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 16:42:46.964928   27095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 16:42:46.964999   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:42:46.965023   27095 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 16:42:46.965032   27095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 16:42:46.965072   27095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 16:42:46.965156   27095 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.ha-543552 san=[127.0.0.1 192.168.39.97 ha-543552 localhost minikube]
	I0416 16:42:47.089013   27095 provision.go:177] copyRemoteCerts
	I0416 16:42:47.089078   27095 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 16:42:47.089103   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:47.091521   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:47.091970   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:47.091994   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:47.092209   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:47.092417   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:47.092573   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:47.092683   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:42:47.182484   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 16:42:47.182553   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 16:42:47.213899   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 16:42:47.213969   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 16:42:47.242781   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 16:42:47.242837   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 16:42:47.269636   27095 provision.go:87] duration metric: took 311.431382ms to configureAuth
	I0416 16:42:47.269661   27095 buildroot.go:189] setting minikube options for container-runtime
	I0416 16:42:47.269886   27095 config.go:182] Loaded profile config "ha-543552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:42:47.269960   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:42:47.272653   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:47.273050   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:42:47.273080   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:42:47.273284   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:42:47.273472   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:47.273643   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:42:47.273782   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:42:47.273942   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:42:47.274091   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:42:47.274106   27095 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 16:44:18.195577   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 16:44:18.195602   27095 machine.go:97] duration metric: took 1m31.637556524s to provisionDockerMachine
	I0416 16:44:18.195615   27095 start.go:293] postStartSetup for "ha-543552" (driver="kvm2")
	I0416 16:44:18.195626   27095 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 16:44:18.195652   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.196023   27095 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 16:44:18.196058   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.199049   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.199487   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.199545   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.199609   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.199817   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.200003   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.200111   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:44:18.286804   27095 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 16:44:18.291585   27095 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 16:44:18.291621   27095 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 16:44:18.291686   27095 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 16:44:18.291769   27095 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 16:44:18.291782   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 16:44:18.291885   27095 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 16:44:18.303191   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:44:18.330048   27095 start.go:296] duration metric: took 134.420713ms for postStartSetup
	I0416 16:44:18.330085   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.330361   27095 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0416 16:44:18.330390   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.333009   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.333592   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.333632   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.333765   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.333928   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.334079   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.334186   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	W0416 16:44:18.422063   27095 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0416 16:44:18.422088   27095 fix.go:56] duration metric: took 1m31.885254681s for fixHost
	I0416 16:44:18.422108   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.424776   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.425135   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.425163   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.425298   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.425493   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.425636   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.425794   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.425999   27095 main.go:141] libmachine: Using SSH client type: native
	I0416 16:44:18.426152   27095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0416 16:44:18.426163   27095 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 16:44:18.538052   27095 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713285858.503032874
	
	I0416 16:44:18.538077   27095 fix.go:216] guest clock: 1713285858.503032874
	I0416 16:44:18.538084   27095 fix.go:229] Guest: 2024-04-16 16:44:18.503032874 +0000 UTC Remote: 2024-04-16 16:44:18.422095403 +0000 UTC m=+92.020966215 (delta=80.937471ms)
	I0416 16:44:18.538117   27095 fix.go:200] guest clock delta is within tolerance: 80.937471ms
	I0416 16:44:18.538123   27095 start.go:83] releasing machines lock for "ha-543552", held for 1m32.001303379s
	I0416 16:44:18.538146   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.538391   27095 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:44:18.541053   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.541472   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.541497   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.541680   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.542150   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.542307   27095 main.go:141] libmachine: (ha-543552) Calling .DriverName
	I0416 16:44:18.542377   27095 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 16:44:18.542413   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.542594   27095 ssh_runner.go:195] Run: cat /version.json
	I0416 16:44:18.542620   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHHostname
	I0416 16:44:18.545148   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.545365   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.545552   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.545582   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.545935   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.545993   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:18.546034   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:18.546086   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHPort
	I0416 16:44:18.546168   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.546237   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHKeyPath
	I0416 16:44:18.546309   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.546370   27095 main.go:141] libmachine: (ha-543552) Calling .GetSSHUsername
	I0416 16:44:18.546431   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:44:18.546464   27095 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/ha-543552/id_rsa Username:docker}
	I0416 16:44:18.656667   27095 ssh_runner.go:195] Run: systemctl --version
	I0416 16:44:18.663268   27095 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 16:44:18.830629   27095 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 16:44:18.841127   27095 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 16:44:18.841185   27095 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 16:44:18.851335   27095 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 16:44:18.851353   27095 start.go:494] detecting cgroup driver to use...
	I0416 16:44:18.851405   27095 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 16:44:18.869324   27095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 16:44:18.883599   27095 docker.go:217] disabling cri-docker service (if available) ...
	I0416 16:44:18.883648   27095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 16:44:18.897905   27095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 16:44:18.912065   27095 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 16:44:19.069099   27095 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 16:44:19.224737   27095 docker.go:233] disabling docker service ...
	I0416 16:44:19.224802   27095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 16:44:19.241830   27095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 16:44:19.258250   27095 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 16:44:19.413597   27095 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 16:44:19.569044   27095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 16:44:19.583698   27095 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 16:44:19.605543   27095 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 16:44:19.605594   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.616939   27095 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 16:44:19.616989   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.628331   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.639298   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.650604   27095 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 16:44:19.662984   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.673916   27095 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 16:44:19.687477   27095 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
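Taken together, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following values. This is a sketch only: the TOML section headers are assumed from CRI-O's stock drop-in layout, and only the keys touched above are shown.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]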
	I0416 16:44:19.699416   27095 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 16:44:19.709355   27095 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 16:44:19.719075   27095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:44:19.871768   27095 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 16:44:20.247056   27095 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 16:44:20.247115   27095 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 16:44:20.253402   27095 start.go:562] Will wait 60s for crictl version
	I0416 16:44:20.253469   27095 ssh_runner.go:195] Run: which crictl
	I0416 16:44:20.258304   27095 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 16:44:20.307506   27095 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 16:44:20.307594   27095 ssh_runner.go:195] Run: crio --version
	I0416 16:44:20.341651   27095 ssh_runner.go:195] Run: crio --version
	I0416 16:44:20.375038   27095 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 16:44:20.376427   27095 main.go:141] libmachine: (ha-543552) Calling .GetIP
	I0416 16:44:20.379091   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:20.379551   27095 main.go:141] libmachine: (ha-543552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:bc:28", ip: ""} in network mk-ha-543552: {Iface:virbr1 ExpiryTime:2024-04-16 17:33:13 +0000 UTC Type:0 Mac:52:54:00:3d:bc:28 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-543552 Clientid:01:52:54:00:3d:bc:28}
	I0416 16:44:20.379578   27095 main.go:141] libmachine: (ha-543552) DBG | domain ha-543552 has defined IP address 192.168.39.97 and MAC address 52:54:00:3d:bc:28 in network mk-ha-543552
	I0416 16:44:20.379783   27095 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 16:44:20.385066   27095 kubeadm.go:877] updating cluster {Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.126 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 16:44:20.385203   27095 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 16:44:20.385250   27095 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:44:20.432240   27095 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 16:44:20.432260   27095 crio.go:433] Images already preloaded, skipping extraction
	I0416 16:44:20.432306   27095 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 16:44:20.469407   27095 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 16:44:20.469428   27095 cache_images.go:84] Images are preloaded, skipping loading
	I0416 16:44:20.469436   27095 kubeadm.go:928] updating node { 192.168.39.97 8443 v1.29.3 crio true true} ...
	I0416 16:44:20.469515   27095 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-543552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
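To inspect the flags exactly as kubelet will load them, systemd can print the merged unit plus its drop-in (the drop-in path is the one written a few lines below):

    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf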
	I0416 16:44:20.469574   27095 ssh_runner.go:195] Run: crio config
	I0416 16:44:20.522144   27095 cni.go:84] Creating CNI manager for ""
	I0416 16:44:20.522170   27095 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0416 16:44:20.522178   27095 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 16:44:20.522200   27095 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-543552 NodeName:ha-543552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 16:44:20.522322   27095 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-543552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
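Before kubeadm consumes the config above, it can be checked for schema problems; a sketch, assuming the "kubeadm config validate" subcommand is available in the pinned v1.29.3 binary and using the path the file is copied to further below:

    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new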
	
	I0416 16:44:20.522341   27095 kube-vip.go:111] generating kube-vip config ...
	I0416 16:44:20.522377   27095 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 16:44:20.535342   27095 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 16:44:20.535425   27095 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
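The address value above is the APIServerHAVIP (192.168.39.254) from the cluster config; once the manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (done below), a quick on-node check that the pod is running and the VIP landed on eth0 is possible with standard crictl/iproute2 commands (a sketch):

    sudo crictl ps --name kube-vip
    ip -4 addr show eth0 | grep 192.168.39.254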
	I0416 16:44:20.535471   27095 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 16:44:20.545523   27095 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 16:44:20.545582   27095 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 16:44:20.556672   27095 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0416 16:44:20.575004   27095 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 16:44:20.594022   27095 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0416 16:44:20.612388   27095 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0416 16:44:20.632015   27095 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 16:44:20.636676   27095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 16:44:20.800163   27095 ssh_runner.go:195] Run: sudo systemctl start kubelet
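A short status check after the restart, to confirm kubelet came up cleanly with the new drop-in (plain systemd/journal commands):

    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet -n 20 --no-pager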
	I0416 16:44:20.819297   27095 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552 for IP: 192.168.39.97
	I0416 16:44:20.819318   27095 certs.go:194] generating shared ca certs ...
	I0416 16:44:20.819333   27095 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:44:20.819472   27095 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 16:44:20.819509   27095 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 16:44:20.819519   27095 certs.go:256] generating profile certs ...
	I0416 16:44:20.819596   27095 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/client.key
	I0416 16:44:20.819621   27095 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.8b6168fb
	I0416 16:44:20.819633   27095 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.8b6168fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.97 192.168.39.80 192.168.39.125 192.168.39.254]
	I0416 16:44:21.175357   27095 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.8b6168fb ...
	I0416 16:44:21.175385   27095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.8b6168fb: {Name:mk1501f25805c360dbf87b20b36f8d058b5d5d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:44:21.175539   27095 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.8b6168fb ...
	I0416 16:44:21.175550   27095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.8b6168fb: {Name:mkd52e22a73bfbe45bc889b3d428bcb585149e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:44:21.175615   27095 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt.8b6168fb -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt
	I0416 16:44:21.175746   27095 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key.8b6168fb -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key
	I0416 16:44:21.175862   27095 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key
	I0416 16:44:21.175877   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 16:44:21.175889   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 16:44:21.175910   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 16:44:21.175923   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 16:44:21.175936   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 16:44:21.175947   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 16:44:21.175959   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 16:44:21.175996   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 16:44:21.176041   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 16:44:21.176070   27095 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 16:44:21.176079   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 16:44:21.176107   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 16:44:21.176129   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 16:44:21.176157   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 16:44:21.176201   27095 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 16:44:21.176228   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 16:44:21.176242   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 16:44:21.176254   27095 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:44:21.176869   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 16:44:21.216249   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 16:44:21.241537   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 16:44:21.268691   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 16:44:21.294206   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 16:44:21.319468   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 16:44:21.348907   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 16:44:21.375630   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/ha-543552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 16:44:21.402095   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 16:44:21.429641   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 16:44:21.458195   27095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 16:44:21.483551   27095 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 16:44:21.501981   27095 ssh_runner.go:195] Run: openssl version
	I0416 16:44:21.508355   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 16:44:21.520580   27095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 16:44:21.525435   27095 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 16:44:21.525482   27095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 16:44:21.531709   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 16:44:21.542873   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 16:44:21.555903   27095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 16:44:21.560607   27095 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 16:44:21.560648   27095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 16:44:21.566768   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 16:44:21.577741   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 16:44:21.590709   27095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:44:21.595547   27095 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:44:21.595585   27095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 16:44:21.601502   27095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 16:44:21.611952   27095 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 16:44:21.616857   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 16:44:21.622791   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 16:44:21.629050   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 16:44:21.634982   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 16:44:21.641057   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 16:44:21.648105   27095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
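The -checkend 86400 probes above exit 0 only if a certificate is still valid 86400 seconds (24 hours) from now; an equivalent manual check that also prints the expiry date, using one of the paths already listed above:

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"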
	I0416 16:44:21.653969   27095 kubeadm.go:391] StartCluster: {Name:ha-543552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-543552 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.80 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.125 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.126 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:44:21.654088   27095 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 16:44:21.654273   27095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 16:44:21.696081   27095 cri.go:89] found id: "34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	I0416 16:44:21.696105   27095 cri.go:89] found id: "c362500f7e55526abcf7249f79b2175d1d1d631675eb2ca2853467620d503f4d"
	I0416 16:44:21.696110   27095 cri.go:89] found id: "5253ff7e10c8b05ddf63d97cc374fa63de54e7da01db140397b9d7c362ec886f"
	I0416 16:44:21.696114   27095 cri.go:89] found id: "c5a3fffcef10ebf58c0c68e68eb1ed85bce4828a270949fad6fcc88bd60a9035"
	I0416 16:44:21.696118   27095 cri.go:89] found id: "77fbefda8f60d33884d3055d8a68bb6fbaeafb8168891df56026217ea04576c5"
	I0416 16:44:21.696122   27095 cri.go:89] found id: "516d3634a70bd6b25e4837c7c531541aa74dae91e0e0fad94f7f5eae6eca436e"
	I0416 16:44:21.696130   27095 cri.go:89] found id: "a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324"
	I0416 16:44:21.696133   27095 cri.go:89] found id: "e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108"
	I0416 16:44:21.696135   27095 cri.go:89] found id: "697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18"
	I0416 16:44:21.696140   27095 cri.go:89] found id: "b4d4b03694327669172d4c84094090377c45750fe6f9c88d01902e8ce4533e8c"
	I0416 16:44:21.696143   27095 cri.go:89] found id: "495afba1f754949aaef7119e4381e04765b4e7d7bf3db3238fbd33033f21635e"
	I0416 16:44:21.696145   27095 cri.go:89] found id: "ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1"
	I0416 16:44:21.696149   27095 cri.go:89] found id: "5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9"
	I0416 16:44:21.696152   27095 cri.go:89] found id: "80fb22fd3cc49a7c837b2def0b2ce51d6a4611a1251ba6ed7f9a92a230c59f88"
	I0416 16:44:21.696157   27095 cri.go:89] found id: ""
	I0416 16:44:21.696194   27095 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.821780635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4209d41a-24e2-4fa9-9b35-566f2fd8e909 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.823430129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93a123ef-b5ef-4012-b1cf-8cde8d642070 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.824277437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713286211824180378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93a123ef-b5ef-4012-b1cf-8cde8d642070 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.824862614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc38a6b9-9a73-44ea-9a21-9c17ad39dead name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.825000267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc38a6b9-9a73-44ea-9a21-9c17ad39dead name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.825423886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f626f23f45f45114f63396fa72b114930ec60451bc8e3ecd87dbd51a757e6b5,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713286034408549393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285943393319204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285908399227992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285907396041694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811738ab74e7743b024d87fcbf087efef3a91fd5cffd0f0125dd87cd5a63f426,PodSandboxId:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285897692499532,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d28dc14e24d93141915f5d854b997e42c83d402327718cd3878be9782d19db9,PodSandboxId:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285879959711575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b,PodSandboxId:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285864732057766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051,PodSandboxId:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864779216681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713285864386239854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c,PodSandboxId:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864537111852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2,PodSandboxId:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285864490679318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713285864380621177,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d31
3882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713285864245183352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e
3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98,PodSandboxId:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285864217104521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713285861080897629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713285382938105207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kuber
netes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238765247564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238689936081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713285236321635307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713285214233687514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713285214183932146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc38a6b9-9a73-44ea-9a21-9c17ad39dead name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.880658775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9dfa3ac4-a27b-4ba8-a1c2-d051fae35bd7 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.880776022Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9dfa3ac4-a27b-4ba8-a1c2-d051fae35bd7 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.882828044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8d2a37a-c104-46fa-90bf-a2dcf3a058fa name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.883655647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713286211883611259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8d2a37a-c104-46fa-90bf-a2dcf3a058fa name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.884285571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a1a37a3-1cc9-4281-bbae-be65443277a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.884346578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a1a37a3-1cc9-4281-bbae-be65443277a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.884823882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f626f23f45f45114f63396fa72b114930ec60451bc8e3ecd87dbd51a757e6b5,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713286034408549393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285943393319204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285908399227992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285907396041694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811738ab74e7743b024d87fcbf087efef3a91fd5cffd0f0125dd87cd5a63f426,PodSandboxId:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285897692499532,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d28dc14e24d93141915f5d854b997e42c83d402327718cd3878be9782d19db9,PodSandboxId:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285879959711575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b,PodSandboxId:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285864732057766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051,PodSandboxId:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864779216681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713285864386239854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c,PodSandboxId:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864537111852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2,PodSandboxId:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285864490679318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713285864380621177,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d31
3882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713285864245183352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e
3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98,PodSandboxId:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285864217104521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713285861080897629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713285382938105207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kuber
netes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238765247564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238689936081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713285236321635307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713285214233687514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713285214183932146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a1a37a3-1cc9-4281-bbae-be65443277a2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.918555413Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=fd607b72-50c5-4f65-9400-58d9cedb780d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.919028091Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-zmcc2,Uid:861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285897546293982,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:36:21.354429113Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-543552,Uid:3fd17211e1cb9517230e5aacf2735608,Namespace:kube-system,Attempt:0,},State:SANDBOX
_READY,CreatedAt:1713285879859498558,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{kubernetes.io/config.hash: 3fd17211e1cb9517230e5aacf2735608,kubernetes.io/config.seen: 2024-04-16T16:44:20.597466795Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-l9zck,Uid:4f0d01cc-4c32-4953-88ec-f07e72666894,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863883656425,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024
-04-16T16:33:58.112732437Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-k7bn7,Uid:8f45a7f4-5779-49ad-949c-29fe8ad7d485,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863864241515,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:58.101039339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-543552,Uid:82beedbd09d313882734a084237b1940,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863852326596,Labels:map[string]st
ring{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.97:8443,kubernetes.io/config.hash: 82beedbd09d313882734a084237b1940,kubernetes.io/config.seen: 2024-04-16T16:33:41.317144840Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-543552,Uid:b51bc3560314aa63dbce83c0156a5bbe,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863843538415,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,tier: control-
plane,},Annotations:map[string]string{kubernetes.io/config.hash: b51bc3560314aa63dbce83c0156a5bbe,kubernetes.io/config.seen: 2024-04-16T16:33:41.317153911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&PodSandboxMetadata{Name:etcd-ha-543552,Uid:a04ca0e1ec3faa95665bc40ac9b3d994,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863815430682,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.97:2379,kubernetes.io/config.hash: a04ca0e1ec3faa95665bc40ac9b3d994,kubernetes.io/config.seen: 2024-04-16T16:33:41.317155741Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d2f94b0c877730eb30e9c22ac2226ce4af318
6854011a52d01e1c489fd930690,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-543552,Uid:a678895e3a100c5ffc418b140fb8d7e7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863808558524,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a678895e3a100c5ffc418b140fb8d7e7,kubernetes.io/config.seen: 2024-04-16T16:33:41.317152558Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&PodSandboxMetadata{Name:kindnet-7hwtp,Uid:f54400cd-4ab3-4e00-b741-e1419d1b3b66,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863791714356,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:54.365571313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&PodSandboxMetadata{Name:kube-proxy-c9lhc,Uid:b8027952-1449-42c9-9bea-14aa1eb113aa,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285863779248309,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:54.356723321Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:663f4c76-01f8-4664-9345-740540fdc41c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713285860975600289,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imag
ePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-16T16:33:58.114678929Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-zmcc2,Uid:861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713285381698605780,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:36:21.354429113Z,kubernetes.io/config.sou
rce: api,},RuntimeHandler:,},&PodSandbox{Id:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-l9zck,Uid:4f0d01cc-4c32-4953-88ec-f07e72666894,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713285238431491042,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:58.112732437Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-k7bn7,Uid:8f45a7f4-5779-49ad-949c-29fe8ad7d485,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713285238408409315,Labels:map[string]string{io.kubernetes.container.name: POD,io
.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:58.101039339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&PodSandboxMetadata{Name:kube-proxy-c9lhc,Uid:b8027952-1449-42c9-9bea-14aa1eb113aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713285236195194567,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T16:33:54.356723321Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-543552,Uid:b51bc3560314aa63dbce83c0156a5bbe,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713285213953233309,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b51bc3560314aa63dbce83c0156a5bbe,kubernetes.io/config.seen: 2024-04-16T16:33:33.467589800Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&PodSandboxMetadata{Name:etcd-ha-543552,Uid:a04ca0e1ec3faa95665bc40ac9b3d994,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713285213920346146,Labels:map[string]string{component: etcd,io.kuberne
tes.container.name: POD,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.97:2379,kubernetes.io/config.hash: a04ca0e1ec3faa95665bc40ac9b3d994,kubernetes.io/config.seen: 2024-04-16T16:33:33.467584052Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fd607b72-50c5-4f65-9400-58d9cedb780d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.919888391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ba13bb6-920b-4a90-8942-d2e6cd222fb5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.920052680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ba13bb6-920b-4a90-8942-d2e6cd222fb5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.920794355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f626f23f45f45114f63396fa72b114930ec60451bc8e3ecd87dbd51a757e6b5,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713286034408549393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285943393319204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285908399227992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285907396041694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811738ab74e7743b024d87fcbf087efef3a91fd5cffd0f0125dd87cd5a63f426,PodSandboxId:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285897692499532,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d28dc14e24d93141915f5d854b997e42c83d402327718cd3878be9782d19db9,PodSandboxId:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285879959711575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b,PodSandboxId:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285864732057766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051,PodSandboxId:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864779216681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713285864386239854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c,PodSandboxId:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864537111852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2,PodSandboxId:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285864490679318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713285864380621177,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d31
3882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713285864245183352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e
3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98,PodSandboxId:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285864217104521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713285861080897629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713285382938105207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kuber
netes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238765247564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238689936081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713285236321635307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713285214233687514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713285214183932146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ba13bb6-920b-4a90-8942-d2e6cd222fb5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.943583551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf30f9d0-9f70-4f6c-91d4-c8238c747d32 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.943696884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf30f9d0-9f70-4f6c-91d4-c8238c747d32 name=/runtime.v1.RuntimeService/Version
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.947075185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12d9f068-76f8-4800-834f-741e768934a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.948341339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713286211948312895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12d9f068-76f8-4800-834f-741e768934a8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.949194206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=966f4424-d3bf-445b-b3d8-cc5b12aa09d7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.949356281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=966f4424-d3bf-445b-b3d8-cc5b12aa09d7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 16:50:11 ha-543552 crio[4047]: time="2024-04-16 16:50:11.950376191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f626f23f45f45114f63396fa72b114930ec60451bc8e3ecd87dbd51a757e6b5,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713286034408549393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713285943393319204,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713285908399227992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d313882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713285907396041694,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811738ab74e7743b024d87fcbf087efef3a91fd5cffd0f0125dd87cd5a63f426,PodSandboxId:a526102cd04858e49044061ce0b169735d51665d43a7bd98791e8997610854d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713285897692499532,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kubernetes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d28dc14e24d93141915f5d854b997e42c83d402327718cd3878be9782d19db9,PodSandboxId:3dfb9b0ef98a713d03198a49033ddb59a2095df971e0e6b9ee164766fbe6808d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713285879959711575,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd17211e1cb9517230e5aacf2735608,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b,PodSandboxId:5c861f43980e520a8544af4f7b46973dffe182d38e8d300bb2c64d673e23eca8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713285864732057766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051,PodSandboxId:ff16342edad0f07ac4b3ff1d92e0d081a9d3bfa8814c1083c9158cfae424dce4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864779216681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b,PodSandboxId:d174184f969e78a9e5fe76cdee10aff7cfa757733984c349acd94264a2352ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713285864386239854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7hwtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54400cd-4ab3-4e00-b741-e1419d1b3b66,},Annotations:map[string]string{io.kubernetes.container.hash: 4129a9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c,PodSandboxId:9bec962e688a90ec80bf268e3c5781f27f8c13b9ea5ae5b29376f6f3763bd6db,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713285864537111852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2,PodSandboxId:c7facafbd53b6730753db3466730da837c16aba2665204c761db92c34a75d177,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713285864490679318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9,PodSandboxId:a216d954b1682d2a5c66957c325e27ac4de39afeb820cdb5e738336b748f83f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713285864380621177,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82beedbd09d31
3882734a084237b1940,},Annotations:map[string]string{io.kubernetes.container.hash: e6915094,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f,PodSandboxId:d2f94b0c877730eb30e9c22ac2226ce4af3186854011a52d01e1c489fd930690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713285864245183352,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a678895e
3a100c5ffc418b140fb8d7e7,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98,PodSandboxId:0dd092f506c50c343809875518e8018b4a4d7d47bfb5b49fd1bf028829b22ab9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713285864217104521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd,PodSandboxId:8a4edbfad9eba8d4aa4d900956bf20f873a800764a8d68c5a39bed214ac836da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713285861080897629,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663f4c76-01f8-4664-9345-740540fdc41c,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1010e69d,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eff3ed28c1a672770376efdce9bcb75cf45eedd5c76097423767f2684f0af65,PodSandboxId:0a4cbed3518bba63bbcb25cbb0546e3defbc7a01f69758a907eebf537ebd95a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713285382938105207,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-zmcc2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 861a27be-8ca1-4ed8-aa6d-1cecb2b3a77c,},Annotations:map[string]string{io.kuber
netes.container.hash: d88a9c68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324,PodSandboxId:7d0e2bbea0507f951198a52848508f493ec449863b0505de372eee2c62c501cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238765247564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-l9zck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0d01cc-4c32-4953-88ec-f07e72666894,},Annotations:map[string]string{io.kubernetes.container.hash: 99ef0152,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108,PodSandboxId:3c0b61b8ba2ff364b0c1ad4ff87b9e2cfe29bec2926ba30936ba2d685e8faa84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713285238689936081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-k7bn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f45a7f4-5779-49ad-949c-29fe8ad7d485,},Annotations:map[string]string{io.kubernetes.container.hash: d61ce4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18,PodSandboxId:016912d243f9d1fd44814e9cf8cb3497c3bcb5e73396c9027da07c3f048d84b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713285236321635307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9lhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8027952-1449-42c9-9bea-14aa1eb113aa,},Annotations:map[string]string{io.kubernetes.container.hash: 344a32f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1,PodSandboxId:f5aa5ed306340377864faef1538af89f46c4c351380c4492b8961f2586b51d97,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713285214233687514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a04ca0e1ec3faa95665bc40ac9b3d994,},Annotations:map[string]string{io.kubernetes.container.hash: 8b8643f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9,PodSandboxId:158c5349515dbe314f29202d2df32329a205f7adeb270c87d0a5bd5e9fe368c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713285214183932146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-543552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51bc3560314aa63dbce83c0156a5bbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=966f4424-d3bf-445b-b3d8-cc5b12aa09d7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2f626f23f45f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   8a4edbfad9eba       storage-provisioner
	a2e493bd365da       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               4                   d174184f969e7       kindnet-7hwtp
	0ed1e36b4ef80       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      5 minutes ago       Running             kube-apiserver            3                   a216d954b1682       kube-apiserver-ha-543552
	04c928227a1f9       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      5 minutes ago       Running             kube-controller-manager   2                   d2f94b0c87773       kube-controller-manager-ha-543552
	811738ab74e77       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   a526102cd0485       busybox-7fdf7869d9-zmcc2
	9d28dc14e24d9       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  0                   3dfb9b0ef98a7       kube-vip-ha-543552
	30df8eedb316c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   ff16342edad0f       coredns-76f75df574-l9zck
	918c02ba99e66       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      5 minutes ago       Running             kube-proxy                1                   5c861f43980e5       kube-proxy-c9lhc
	a279ffbd01e2f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   9bec962e688a9       coredns-76f75df574-k7bn7
	41f892ff8eaf1       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      5 minutes ago       Running             kube-scheduler            1                   c7facafbd53b6       kube-scheduler-ha-543552
	20f390b64c98c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               3                   d174184f969e7       kindnet-7hwtp
	95803f125e402       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      5 minutes ago       Exited              kube-apiserver            2                   a216d954b1682       kube-apiserver-ha-543552
	eafcfbd628239       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      5 minutes ago       Exited              kube-controller-manager   1                   d2f94b0c87773       kube-controller-manager-ha-543552
	c05f62ae79b1e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   0dd092f506c50       etcd-ha-543552
	34ee105194855       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   8a4edbfad9eba       storage-provisioner
	4eff3ed28c1a6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   0a4cbed3518bb       busybox-7fdf7869d9-zmcc2
	a326689cf68a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   7d0e2bbea0507       coredns-76f75df574-l9zck
	e82d4c4b6df66       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   3c0b61b8ba2ff       coredns-76f75df574-k7bn7
	697fe1db84b5d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      16 minutes ago      Exited              kube-proxy                0                   016912d243f9d       kube-proxy-c9lhc
	ce9f179d540bc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   f5aa5ed306340       etcd-ha-543552
	5f7d02aab74a8       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      16 minutes ago      Exited              kube-scheduler            0                   158c5349515db       kube-scheduler-ha-543552
	
	
	==> coredns [30df8eedb316c2a93d62896de91b95ae32a5d62671673e6a82ed240833a25051] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:45458->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45432->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1815561358]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 16:44:37.845) (total time: 10967ms):
	Trace[1815561358]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45432->10.96.0.1:443: read: connection reset by peer 10967ms (16:44:48.812)
	Trace[1815561358]: [10.967618336s] [10.967618336s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:45432->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a279ffbd01e2f075598454f38aa06026d47f22d5c2fac24b64f42cd110e84b3c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1790753702]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 16:44:36.400) (total time: 12412ms):
	Trace[1790753702]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42466->10.96.0.1:443: read: connection reset by peer 12412ms (16:44:48.812)
	Trace[1790753702]: [12.412082072s] [12.412082072s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a326689cf68a6fa866f3c49fbb9dfc28c92404355a123db2be4ddc2cae077324] <==
	[INFO] 10.244.1.2:38888 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185603s
	[INFO] 10.244.0.4:46391 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104951s
	[INFO] 10.244.0.4:59290 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001608985s
	[INFO] 10.244.0.4:39400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075172s
	[INFO] 10.244.2.2:50417 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152413s
	[INFO] 10.244.2.2:51697 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000216701s
	[INFO] 10.244.2.2:46301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158413s
	[INFO] 10.244.1.2:58450 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001388s
	[INFO] 10.244.1.2:43346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108795s
	[INFO] 10.244.0.4:44420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074923s
	[INFO] 10.244.0.4:51452 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107645s
	[INFO] 10.244.2.2:44963 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121222s
	[INFO] 10.244.2.2:46302 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00020113s
	[INFO] 10.244.2.2:51995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000170275s
	[INFO] 10.244.0.4:40157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126298s
	[INFO] 10.244.0.4:54438 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176652s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1955&timeout=5m1s&timeoutSeconds=301&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1955&timeout=8m39s&timeoutSeconds=519&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e82d4c4b6df6628c8091d7856ed2f5743fd2e4cd897d7a5b2c613115a5074108] <==
	[INFO] 10.244.0.4:37034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123057s
	[INFO] 10.244.0.4:56706 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077781s
	[INFO] 10.244.2.2:48795 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014109s
	[INFO] 10.244.1.2:60733 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013497s
	[INFO] 10.244.1.2:47606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137564s
	[INFO] 10.244.0.4:43266 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102784s
	[INFO] 10.244.0.4:35773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161303s
	[INFO] 10.244.2.2:35260 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000298984s
	[INFO] 10.244.1.2:48933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119878s
	[INFO] 10.244.1.2:44462 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168252s
	[INFO] 10.244.1.2:50323 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147657s
	[INFO] 10.244.1.2:51016 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131163s
	[INFO] 10.244.0.4:50260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114104s
	[INFO] 10.244.0.4:37053 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068482s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1955&timeout=5m0s&timeoutSeconds=300&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1955&timeout=9m7s&timeoutSeconds=547&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1955&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-543552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T16_33_41_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:33:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:50:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:45:12 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:45:12 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:45:12 +0000   Tue, 16 Apr 2024 16:33:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:45:12 +0000   Tue, 16 Apr 2024 16:33:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-543552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6dd8560d23a945a5aa6d3b02a2c3dc1b
	  System UUID:                6dd8560d-23a9-45a5-aa6d-3b02a2c3dc1b
	  Boot ID:                    7c97db37-f0b9-4406-9537-1480d467974d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-zmcc2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-76f75df574-k7bn7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-l9zck             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-543552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-7hwtp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-543552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-543552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-c9lhc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-543552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-543552                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 16m    kube-proxy       
	  Normal   Starting                 5m1s   kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-543552 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-543552 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-543552 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m    node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal   NodeReady                16m    kubelet          Node ha-543552 status is now: NodeReady
	  Normal   RegisteredNode           15m    node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Warning  ContainerGCFailed        6m31s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m55s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal   RegisteredNode           4m49s  node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	  Normal   RegisteredNode           3m8s   node-controller  Node ha-543552 event: Registered Node ha-543552 in Controller
	
	
	Name:               ha-543552-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_34_54_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:34:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:50:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 16:45:56 +0000   Tue, 16 Apr 2024 16:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 16:45:56 +0000   Tue, 16 Apr 2024 16:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 16:45:56 +0000   Tue, 16 Apr 2024 16:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 16:45:56 +0000   Tue, 16 Apr 2024 16:45:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-543552-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2f4c6e70b7c46048863edfff3e863df
	  System UUID:                e2f4c6e7-0b7c-4604-8863-edfff3e863df
	  Boot ID:                    70d47971-e6dc-43b9-9f8c-edccc6c7e460
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7wbjg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-543552-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-q4275                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-543552-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-543552-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-2vkts                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-543552-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-543552-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m42s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-543552-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-543552-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-543552-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-543552-m02 status is now: NodeNotReady
	  Normal  Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-543552-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-543552-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-543552-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-543552-m02 event: Registered Node ha-543552-m02 in Controller
	
	
	Name:               ha-543552-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-543552-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=ha-543552
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T16_36_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 16:36:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-543552-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 16:47:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 16 Apr 2024 16:47:23 +0000   Tue, 16 Apr 2024 16:48:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 16 Apr 2024 16:47:23 +0000   Tue, 16 Apr 2024 16:48:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 16 Apr 2024 16:47:23 +0000   Tue, 16 Apr 2024 16:48:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 16 Apr 2024 16:47:23 +0000   Tue, 16 Apr 2024 16:48:28 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    ha-543552-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f46fde69f5e74ab18cd1001a10200bfb
	  System UUID:                f46fde69-f5e7-4ab1-8cd1-001a10200bfb
	  Boot ID:                    e27ef21c-b7d2-48b7-9e70-fd2cf7f99c23
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-2cwwc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-4hghz               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-g5pqm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-543552-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-543552-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-543552-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-543552-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   RegisteredNode           3m8s                   node-controller  Node ha-543552-m04 event: Registered Node ha-543552-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-543552-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-543552-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-543552-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-543552-m04 has been rebooted, boot id: e27ef21c-b7d2-48b7-9e70-fd2cf7f99c23
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-543552-m04 status is now: NodeReady
	  Normal   NodeNotReady             104s (x2 over 4m15s)   node-controller  Node ha-543552-m04 status is now: NodeNotReady
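
	The ha-543552-m04 description above shows the condition pattern behind those NodeNotReady events: once the kubelet stops posting status, every condition flips to Unknown and the node controller adds the unreachable taints. A minimal client-go sketch (illustrative only, not part of the minikube test suite; the kubeconfig path is an assumption) that surfaces the same signal for all nodes in the cluster:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the default kubeconfig (~/.kube/config) points at the ha-543552 cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, node := range nodes.Items {
			for _, cond := range node.Status.Conditions {
				// Ready should be True; Unknown means the kubelet stopped posting status,
				// which is what the describe output above records for ha-543552-m04.
				if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
					fmt.Printf("%s: Ready=%s (%s)\n", node.Name, cond.Status, cond.Reason)
				}
			}
		}
	}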
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.068457] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.060006] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073697] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.185591] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.154095] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.315435] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.805735] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.066066] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.494086] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.897359] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.972784] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.095897] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.136469] kauditd_printk_skb: 21 callbacks suppressed
	[Apr16 16:34] kauditd_printk_skb: 74 callbacks suppressed
	[Apr16 16:41] kauditd_printk_skb: 1 callbacks suppressed
	[Apr16 16:44] systemd-fstab-generator[3966]: Ignoring "noauto" option for root device
	[  +0.160416] systemd-fstab-generator[3978]: Ignoring "noauto" option for root device
	[  +0.188010] systemd-fstab-generator[3992]: Ignoring "noauto" option for root device
	[  +0.157092] systemd-fstab-generator[4004]: Ignoring "noauto" option for root device
	[  +0.298229] systemd-fstab-generator[4032]: Ignoring "noauto" option for root device
	[  +0.917006] systemd-fstab-generator[4134]: Ignoring "noauto" option for root device
	[  +3.370999] kauditd_printk_skb: 140 callbacks suppressed
	[ +15.882746] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.355899] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [c05f62ae79b1e1ec783af0bd26d44b8ca1e930de1836e216a5b70a7c668afa98] <==
	{"level":"info","ts":"2024-04-16T16:46:43.953227Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:46:43.96738Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f61fae125a956d36","to":"1f324d4b7ab8c99d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-16T16:46:43.967475Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:46:43.990767Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"f61fae125a956d36","to":"1f324d4b7ab8c99d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-16T16:46:43.990943Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"warn","ts":"2024-04-16T16:46:46.338834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.744769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-543552-m03\" ","response":"range_response_count:1 size:5801"}
	{"level":"info","ts":"2024-04-16T16:46:46.340277Z","caller":"traceutil/trace.go:171","msg":"trace[1831489289] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-543552-m03; range_end:; response_count:1; response_revision:2520; }","duration":"142.32909ms","start":"2024-04-16T16:46:46.197921Z","end":"2024-04-16T16:46:46.34025Z","steps":["trace[1831489289] 'agreement among raft nodes before linearized reading'  (duration: 78.244694ms)","trace[1831489289] 'range keys from in-memory index tree'  (duration: 62.459691ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T16:47:37.520245Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.125:50320","server-name":"","error":"read tcp 192.168.39.97:2379->192.168.39.125:50320: read: connection reset by peer"}
	{"level":"info","ts":"2024-04-16T16:47:37.553334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(12397722251222865940 17735085251460689206)"}
	{"level":"info","ts":"2024-04-16T16:47:37.55649Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","removed-remote-peer-id":"1f324d4b7ab8c99d","removed-remote-peer-urls":["https://192.168.39.125:2380"]}
	{"level":"info","ts":"2024-04-16T16:47:37.556562Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"warn","ts":"2024-04-16T16:47:37.556791Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:47:37.556852Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"warn","ts":"2024-04-16T16:47:37.557286Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:47:37.557349Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:47:37.557757Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"warn","ts":"2024-04-16T16:47:37.558252Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d","error":"context canceled"}
	{"level":"warn","ts":"2024-04-16T16:47:37.558341Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1f324d4b7ab8c99d","error":"failed to read 1f324d4b7ab8c99d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-16T16:47:37.558383Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"warn","ts":"2024-04-16T16:47:37.558721Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-04-16T16:47:37.558847Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:47:37.559358Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:47:37.560067Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"f61fae125a956d36","removed-remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"warn","ts":"2024-04-16T16:47:37.585346Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"f61fae125a956d36","remote-peer-id-stream-handler":"f61fae125a956d36","remote-peer-id-from":"1f324d4b7ab8c99d"}
	{"level":"warn","ts":"2024-04-16T16:47:37.593201Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.125:50152","server-name":"","error":"read tcp 192.168.39.97:2380->192.168.39.125:50152: read: connection reset by peer"}
	
	
	==> etcd [ce9f179d540bce7e36ec975501df557438f0c56ee7afd9d298a3ee94561fe8d1] <==
	2024/04/16 16:42:47 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-16T16:42:47.434802Z","caller":"traceutil/trace.go:171","msg":"trace[417903395] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"428.202181ms","start":"2024-04-16T16:42:47.006594Z","end":"2024-04-16T16:42:47.434797Z","steps":["trace[417903395] 'agreement among raft nodes before linearized reading'  (duration: 410.832589ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T16:42:47.435086Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T16:42:47.006583Z","time spent":"428.491426ms","remote":"127.0.0.1:46958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:500 "}
	2024/04/16 16:42:47 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-16T16:42:47.577146Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7869634524914769506,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-16T16:42:47.697474Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T16:42:47.697604Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.97:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T16:42:47.697746Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"f61fae125a956d36","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-16T16:42:47.697926Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698039Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698123Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698259Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.69841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698487Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698501Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ac0d8eb398185814"}
	{"level":"info","ts":"2024-04-16T16:42:47.698512Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698521Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.69854Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698653Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698683Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698708Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"f61fae125a956d36","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.698718Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1f324d4b7ab8c99d"}
	{"level":"info","ts":"2024-04-16T16:42:47.702152Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-04-16T16:42:47.702385Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-04-16T16:42:47.702425Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-543552","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	
	==> kernel <==
	 16:50:12 up 17 min,  0 users,  load average: 0.31, 0.37, 0.31
	Linux ha-543552 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [20f390b64c98cdb62a5d4c7a541068b7440ea97eaeb33182993a5f0318eadd0b] <==
	I0416 16:44:25.099371       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0416 16:44:25.099455       1 main.go:107] hostIP = 192.168.39.97
	podIP = 192.168.39.97
	I0416 16:44:25.099611       1 main.go:116] setting mtu 1500 for CNI 
	I0416 16:44:25.099660       1 main.go:146] kindnetd IP family: "ipv4"
	I0416 16:44:25.099698       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0416 16:44:27.308606       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0416 16:44:37.309700       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0416 16:44:48.812527       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.135:32802->10.96.0.1:443: read: connection reset by peer
	I0416 16:44:51.884347       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0416 16:44:54.956518       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
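
	The log above ends with kindnetd panicking after its node-list retries are exhausted while the Service VIP (10.96.0.1:443) is unreachable. A hedged sketch of that bounded-retry shape (the retry count and sleep interval are illustrative, not kindnet's actual values):

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves to the kubernetes Service VIP (10.96.0.1:443 here).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		const maxRetries = 5 // illustrative retry budget
		for attempt := 0; attempt < maxRetries; attempt++ {
			if _, err = client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{}); err == nil {
				log.Println("node list succeeded")
				return
			}
			log.Printf("Failed to get nodes, retrying after error: %v", err)
			time.Sleep(10 * time.Second)
		}
		// Mirrors the fatal exit in the log once every retry has failed.
		log.Fatalf("Reached maximum retries obtaining node list: %v", err)
	}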
	
	
	==> kindnet [a2e493bd365dacec65de6d719b1f0a452ee8eea7d27d8ad14f6f2db88988e3d1] <==
	I0416 16:49:24.911454       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:49:34.921113       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:49:34.921164       1 main.go:227] handling current node
	I0416 16:49:34.921179       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:49:34.921185       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:49:34.921300       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:49:34.921306       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:49:44.955774       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:49:44.955826       1 main.go:227] handling current node
	I0416 16:49:44.955842       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:49:44.955848       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:49:44.956020       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:49:44.956028       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:49:54.972059       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:49:54.972082       1 main.go:227] handling current node
	I0416 16:49:54.972092       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:49:54.972098       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:49:54.972217       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:49:54.972222       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	I0416 16:50:04.989363       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I0416 16:50:04.989412       1 main.go:227] handling current node
	I0416 16:50:04.989426       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0416 16:50:04.989432       1 main.go:250] Node ha-543552-m02 has CIDR [10.244.1.0/24] 
	I0416 16:50:04.990034       1 main.go:223] Handling node with IPs: map[192.168.39.126:{}]
	I0416 16:50:04.990070       1 main.go:250] Node ha-543552-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0ed1e36b4ef809e70cad620c21ce45463c969b769e7a3880a44a136a39240ad1] <==
	I0416 16:45:10.329806       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 16:45:10.329827       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 16:45:10.338730       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 16:45:10.339133       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 16:45:10.457796       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 16:45:10.525527       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 16:45:10.525754       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 16:45:10.525790       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 16:45:10.525903       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 16:45:10.526697       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 16:45:10.527893       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 16:45:10.530752       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 16:45:10.531593       1 aggregator.go:165] initial CRD sync complete...
	I0416 16:45:10.531741       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 16:45:10.531849       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 16:45:10.531892       1 cache.go:39] Caches are synced for autoregister controller
	I0416 16:45:10.539560       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0416 16:45:10.539606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.125 192.168.39.80]
	I0416 16:45:10.541216       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 16:45:10.551286       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0416 16:45:10.559724       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0416 16:45:11.339139       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0416 16:45:11.781356       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.125 192.168.39.80 192.168.39.97]
	W0416 16:45:21.783473       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.80 192.168.39.97]
	W0416 16:47:51.793539       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.80 192.168.39.97]
	
	
	==> kube-apiserver [95803f125e40255f8729d25cfbb9340fb6bc4d4e12039ab5b243a3aa2b32f8c9] <==
	I0416 16:44:25.009012       1 options.go:222] external host was not specified, using 192.168.39.97
	I0416 16:44:25.012042       1 server.go:148] Version: v1.29.3
	I0416 16:44:25.012102       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:44:25.731479       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0416 16:44:25.731528       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0416 16:44:25.731773       1 instance.go:297] Using reconciler: lease
	I0416 16:44:25.734522       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W0416 16:44:45.728081       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0416 16:44:45.728276       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0416 16:44:45.734607       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [04c928227a1f93fa704b1c25688d8c86a1eca2f9ae9a8b187ac2f087f5b9bd09] <==
	I0416 16:47:35.372452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="77.531µs"
	I0416 16:47:35.391441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="85.264µs"
	I0416 16:47:35.407404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="50.4µs"
	I0416 16:47:35.413589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.075µs"
	I0416 16:47:35.438226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="41.582µs"
	I0416 16:47:36.261329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="119.025µs"
	I0416 16:47:36.970556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.28µs"
	I0416 16:47:37.017210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.319µs"
	I0416 16:47:37.031488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="118.184µs"
	I0416 16:47:37.463356       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="103.539878ms"
	I0416 16:47:37.463504       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.462µs"
	I0416 16:47:49.260552       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-543552-m04"
	I0416 16:47:53.346366       1 event.go:376] "Event occurred" object="ha-543552-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-543552-m03 event: Removing Node ha-543552-m03 from Controller"
	E0416 16:48:03.278201       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:03.278316       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:03.278330       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:03.278340       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:03.278349       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:23.279096       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:23.279167       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:23.279180       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:23.279190       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	E0416 16:48:23.279199       1 gc_controller.go:153] "Failed to get node" err="node \"ha-543552-m03\" not found" node="ha-543552-m03"
	I0416 16:48:28.119689       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.979876ms"
	I0416 16:48:28.121391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="65.142µs"
	
	
	==> kube-controller-manager [eafcfbd628239950b5b1bd9eca52875c807ffa643476cfbd53861fc85c2dc84f] <==
	I0416 16:44:25.968889       1 serving.go:380] Generated self-signed cert in-memory
	I0416 16:44:26.291344       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0416 16:44:26.291392       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:44:26.293333       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 16:44:26.293395       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 16:44:26.293632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0416 16:44:26.294361       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0416 16:44:46.742249       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.97:8443/healthz\": dial tcp 192.168.39.97:8443: connect: connection refused"
	
	
	==> kube-proxy [697fe1db84b5d1ea1b8876d23c20863adbc7d3cd0c1c44615a26ce55e1cebb18] <==
	E0416 16:41:44.751738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:47.822704       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:47.822863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:47.823067       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:47.823261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:47.823638       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:47.823761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:53.966263       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:53.966367       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:53.966594       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:53.966747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:41:57.037502       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:41:57.037607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:06.253685       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:06.254085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:06.254286       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:06.254383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:09.325178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:09.325460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:24.687660       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:24.687911       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:27.756585       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:27.756656       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:42:30.828670       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:42:30.828809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [918c02ba99e6633f79f7fccdc945ebb27c631e0f18e51358d7a2dfbff35dbc0b] <==
	E0416 16:44:49.069280       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-543552\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 16:45:10.575126       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-543552\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0416 16:45:10.575244       1 server.go:1020] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0416 16:45:10.626343       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 16:45:10.626446       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 16:45:10.626515       1 server_others.go:168] "Using iptables Proxier"
	I0416 16:45:10.631583       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 16:45:10.631886       1 server.go:865] "Version info" version="v1.29.3"
	I0416 16:45:10.632281       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 16:45:10.633854       1 config.go:188] "Starting service config controller"
	I0416 16:45:10.634109       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 16:45:10.634184       1 config.go:97] "Starting endpoint slice config controller"
	I0416 16:45:10.634208       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 16:45:10.634263       1 config.go:315] "Starting node config controller"
	I0416 16:45:10.634301       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0416 16:45:13.644740       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0416 16:45:13.644943       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:45:13.645151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-543552&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:45:13.644935       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:45:13.645220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 16:45:13.645404       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 16:45:13.645484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0416 16:45:14.734940       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 16:45:15.035677       1 shared_informer.go:318] Caches are synced for node config
	I0416 16:45:15.235083       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [41f892ff8eaf18eda06650d052511ef168a5109d4cea97e1a722fdfe6dba17e2] <==
	W0416 16:45:03.205588       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.97:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:03.205715       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.97:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:04.102617       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:04.102749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:04.286693       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.97:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:04.286760       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.97:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:04.317395       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.97:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:04.317464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.97:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:04.751421       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.39.97:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:04.751513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.97:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:05.223868       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.97:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:05.224133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.97:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:05.528909       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:05.529154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:06.774295       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:06.774375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.97:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	W0416 16:45:08.345476       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.97:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	E0416 16:45:08.345567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.97:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.97:8443: connect: connection refused
	I0416 16:45:23.749291       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0416 16:47:34.131594       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-rs7f9\": pod busybox-7fdf7869d9-rs7f9 is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-rs7f9" node="ha-543552-m04"
	E0416 16:47:34.131827       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-rs7f9\": pod busybox-7fdf7869d9-rs7f9 is already assigned to node \"ha-543552-m04\"" pod="default/busybox-7fdf7869d9-rs7f9"
	E0416 16:47:35.373388       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-2cwwc\": pod busybox-7fdf7869d9-2cwwc is already assigned to node \"ha-543552-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-2cwwc" node="ha-543552-m04"
	E0416 16:47:35.374942       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 00516116-7bb0-444b-bbfb-a70ec00ccb89(default/busybox-7fdf7869d9-2cwwc) wasn't assumed so cannot be forgotten"
	E0416 16:47:35.375093       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-2cwwc\": pod busybox-7fdf7869d9-2cwwc is already assigned to node \"ha-543552-m04\"" pod="default/busybox-7fdf7869d9-2cwwc"
	I0416 16:47:35.375129       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-2cwwc" node="ha-543552-m04"
	
	
	==> kube-scheduler [5f7d02aab74a8ed9f8a931d773e51bab23702a9ed3812c6708fe8a1a8440b6d9] <==
	E0416 16:42:44.784738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 16:42:45.356938       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 16:42:45.357100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 16:42:45.647158       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 16:42:45.647262       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 16:42:46.150484       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 16:42:46.150586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 16:42:46.166256       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 16:42:46.166336       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 16:42:46.232926       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 16:42:46.233089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 16:42:46.543123       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 16:42:46.543155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 16:42:46.739242       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 16:42:46.739269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 16:42:46.930812       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 16:42:46.930840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 16:42:47.123025       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 16:42:47.123090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 16:42:47.287904       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 16:42:47.287930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0416 16:42:47.389557       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 16:42:47.389822       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 16:42:47.399072       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 16:42:47.399300       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 16 16:46:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:46:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:46:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:46:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:46:45 ha-543552 kubelet[1371]: I0416 16:46:45.381343    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:46:45 ha-543552 kubelet[1371]: E0416 16:46:45.381564    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(663f4c76-01f8-4664-9345-740540fdc41c)\"" pod="kube-system/storage-provisioner" podUID="663f4c76-01f8-4664-9345-740540fdc41c"
	Apr 16 16:47:00 ha-543552 kubelet[1371]: I0416 16:47:00.381090    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:47:00 ha-543552 kubelet[1371]: E0416 16:47:00.381831    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(663f4c76-01f8-4664-9345-740540fdc41c)\"" pod="kube-system/storage-provisioner" podUID="663f4c76-01f8-4664-9345-740540fdc41c"
	Apr 16 16:47:14 ha-543552 kubelet[1371]: I0416 16:47:14.380853    1371 scope.go:117] "RemoveContainer" containerID="34ee10519485544c3bb2a3a685fa488ef602ba8413c2dddec545d205966208dd"
	Apr 16 16:47:14 ha-543552 kubelet[1371]: I0416 16:47:14.911669    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-543552" podStartSLOduration=75.911571355 podStartE2EDuration="1m15.911571355s" podCreationTimestamp="2024-04-16 16:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-16 16:46:01.401611284 +0000 UTC m=+740.236063285" watchObservedRunningTime="2024-04-16 16:47:14.911571355 +0000 UTC m=+813.746023360"
	Apr 16 16:47:41 ha-543552 kubelet[1371]: E0416 16:47:41.433676    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:47:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:47:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:47:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:47:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:48:41 ha-543552 kubelet[1371]: E0416 16:48:41.432637    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:48:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:48:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:48:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:48:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 16:49:41 ha-543552 kubelet[1371]: E0416 16:49:41.437319    1371 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 16:49:41 ha-543552 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 16:49:41 ha-543552 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 16:49:41 ha-543552 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 16:49:41 ha-543552 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 16:50:11.418356   29526 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18649-3628/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-543552 -n ha-543552
helpers_test.go:261: (dbg) Run:  kubectl --context ha-543552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.26s)
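Note on the post-mortem above: kube-proxy and kube-scheduler are repeatedly failing to reach the HA control-plane endpoint (control-plane.minikube.internal, 192.168.39.254:8443) with "no route to host" and "connection refused" while the cluster is being stopped. A minimal diagnostic sketch, assuming the ha-543552 profile from these logs were still present on the test host; the commands below are illustrative only and are not part of the test run:

	# per-node host and apiserver state for the profile
	out/minikube-linux-amd64 status -p ha-543552
	# from inside the primary node, check whether a route to the VIP still exists
	out/minikube-linux-amd64 -p ha-543552 ssh "ip route"
	# probe the apiserver health endpoint behind the VIP
	out/minikube-linux-amd64 -p ha-543552 ssh "curl -sk https://control-plane.minikube.internal:8443/healthz"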

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (307.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334221
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-334221
E0416 17:07:03.893080   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 17:07:10.030734   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-334221: exit status 82 (2m2.719701222s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-334221-m03"  ...
	* Stopping node "multinode-334221-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-334221" : exit status 82
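The stop above exits with status 82 (GUEST_STOP_TIMEOUT) after the stop loop gives up on a VM that still reports state "Running". A hedged follow-up sketch that mirrors the advice printed in the stderr box, reusing the profile name from the logs (illustrative only, not part of the test run):

	# collect the full logs the error box asks for
	out/minikube-linux-amd64 -p multinode-334221 logs --file=logs.txt
	# see which nodes the profile still reports as running
	out/minikube-linux-amd64 status -p multinode-334221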
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334221 --wait=true -v=8 --alsologtostderr
E0416 17:10:13.077299   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334221 --wait=true -v=8 --alsologtostderr: (3m1.851407773s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334221
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-334221 -n multinode-334221
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-334221 logs -n 25: (1.672531998s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m02:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3051956935/001/cp-test_multinode-334221-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m02:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221:/home/docker/cp-test_multinode-334221-m02_multinode-334221.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n multinode-334221 sudo cat                                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /home/docker/cp-test_multinode-334221-m02_multinode-334221.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m02:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03:/home/docker/cp-test_multinode-334221-m02_multinode-334221-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n multinode-334221-m03 sudo cat                                   | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /home/docker/cp-test_multinode-334221-m02_multinode-334221-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp testdata/cp-test.txt                                                | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3051956935/001/cp-test_multinode-334221-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221:/home/docker/cp-test_multinode-334221-m03_multinode-334221.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n multinode-334221 sudo cat                                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /home/docker/cp-test_multinode-334221-m03_multinode-334221.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02:/home/docker/cp-test_multinode-334221-m03_multinode-334221-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n multinode-334221-m02 sudo cat                                   | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /home/docker/cp-test_multinode-334221-m03_multinode-334221-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-334221 node stop m03                                                          | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	| node    | multinode-334221 node start                                                             | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-334221                                                                | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:05 UTC |                     |
	| stop    | -p multinode-334221                                                                     | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:05 UTC |                     |
	| start   | -p multinode-334221                                                                     | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:07 UTC | 16 Apr 24 17:10 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-334221                                                                | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:07:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:07:12.905612   38726 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:07:12.905741   38726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:07:12.905752   38726 out.go:304] Setting ErrFile to fd 2...
	I0416 17:07:12.905759   38726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:07:12.905969   38726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:07:12.906525   38726 out.go:298] Setting JSON to false
	I0416 17:07:12.907430   38726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2985,"bootTime":1713284248,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:07:12.907494   38726 start.go:139] virtualization: kvm guest
	I0416 17:07:12.910000   38726 out.go:177] * [multinode-334221] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:07:12.911444   38726 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:07:12.912802   38726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:07:12.911467   38726 notify.go:220] Checking for updates...
	I0416 17:07:12.914191   38726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:07:12.915788   38726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:07:12.917154   38726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:07:12.918458   38726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:07:12.920205   38726 config.go:182] Loaded profile config "multinode-334221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:07:12.920299   38726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:07:12.920697   38726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:07:12.920741   38726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:07:12.936029   38726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0416 17:07:12.936483   38726 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:07:12.937088   38726 main.go:141] libmachine: Using API Version  1
	I0416 17:07:12.937117   38726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:07:12.937436   38726 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:07:12.937610   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:07:12.974566   38726 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:07:12.975929   38726 start.go:297] selected driver: kvm2
	I0416 17:07:12.975941   38726 start.go:901] validating driver "kvm2" against &{Name:multinode-334221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.95 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:07:12.976075   38726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:07:12.976406   38726 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:07:12.976471   38726 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:07:12.991966   38726 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:07:12.992637   38726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:07:12.992699   38726 cni.go:84] Creating CNI manager for ""
	I0416 17:07:12.992715   38726 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 17:07:12.992763   38726 start.go:340] cluster config:
	{Name:multinode-334221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.95 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:07:12.992903   38726 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:07:12.995424   38726 out.go:177] * Starting "multinode-334221" primary control-plane node in "multinode-334221" cluster
	I0416 17:07:12.996756   38726 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:07:12.996792   38726 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:07:12.996806   38726 cache.go:56] Caching tarball of preloaded images
	I0416 17:07:12.996907   38726 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:07:12.996920   38726 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:07:12.997047   38726 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/config.json ...
	I0416 17:07:12.997256   38726 start.go:360] acquireMachinesLock for multinode-334221: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:07:12.997322   38726 start.go:364] duration metric: took 47.31µs to acquireMachinesLock for "multinode-334221"
	I0416 17:07:12.997340   38726 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:07:12.997353   38726 fix.go:54] fixHost starting: 
	I0416 17:07:12.997607   38726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:07:12.997655   38726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:07:13.011720   38726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0416 17:07:13.012174   38726 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:07:13.012671   38726 main.go:141] libmachine: Using API Version  1
	I0416 17:07:13.012698   38726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:07:13.013009   38726 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:07:13.013181   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:07:13.013343   38726 main.go:141] libmachine: (multinode-334221) Calling .GetState
	I0416 17:07:13.015079   38726 fix.go:112] recreateIfNeeded on multinode-334221: state=Running err=<nil>
	W0416 17:07:13.015113   38726 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:07:13.017345   38726 out.go:177] * Updating the running kvm2 "multinode-334221" VM ...
	I0416 17:07:13.018706   38726 machine.go:94] provisionDockerMachine start ...
	I0416 17:07:13.018725   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:07:13.019210   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.022708   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.023204   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.023237   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.023379   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.023567   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.023736   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.023885   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.024018   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:07:13.024201   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:07:13.024214   38726 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:07:13.134859   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-334221
	
	I0416 17:07:13.134891   38726 main.go:141] libmachine: (multinode-334221) Calling .GetMachineName
	I0416 17:07:13.135135   38726 buildroot.go:166] provisioning hostname "multinode-334221"
	I0416 17:07:13.135163   38726 main.go:141] libmachine: (multinode-334221) Calling .GetMachineName
	I0416 17:07:13.135361   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.137937   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.138358   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.138383   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.138520   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.138692   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.138842   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.138979   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.139130   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:07:13.139283   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:07:13.139296   38726 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-334221 && echo "multinode-334221" | sudo tee /etc/hostname
	I0416 17:07:13.271297   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-334221
	
	I0416 17:07:13.271322   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.274259   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.274640   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.274672   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.274860   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.275059   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.275226   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.275348   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.275483   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:07:13.275686   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:07:13.275703   38726 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-334221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-334221/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-334221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:07:13.382275   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:07:13.382307   38726 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:07:13.382346   38726 buildroot.go:174] setting up certificates
	I0416 17:07:13.382364   38726 provision.go:84] configureAuth start
	I0416 17:07:13.382375   38726 main.go:141] libmachine: (multinode-334221) Calling .GetMachineName
	I0416 17:07:13.382684   38726 main.go:141] libmachine: (multinode-334221) Calling .GetIP
	I0416 17:07:13.385558   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.385934   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.385955   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.386157   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.388263   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.388629   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.388665   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.388820   38726 provision.go:143] copyHostCerts
	I0416 17:07:13.388862   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:07:13.388895   38726 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:07:13.388910   38726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:07:13.388975   38726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:07:13.389060   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:07:13.389078   38726 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:07:13.389085   38726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:07:13.389109   38726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:07:13.389154   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:07:13.389170   38726 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:07:13.389176   38726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:07:13.389196   38726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:07:13.389241   38726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.multinode-334221 san=[127.0.0.1 192.168.39.137 localhost minikube multinode-334221]
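
The server certificate above is generated on the Jenkins host, signed by the profile CA, and given the SAN set [127.0.0.1 192.168.39.137 localhost minikube multinode-334221] before being copied to /etc/docker on the guest. A minimal sketch of producing a certificate with that SAN set using crypto/x509, assuming the CA certificate and key have already been loaded elsewhere; this is illustrative, not minikube's provision code.

```go
// servercert.go: sketch of issuing a CA-signed server certificate with the
// SANs listed in the log line above. caCert/caKey are assumed to be the
// profile CA pair loaded from ca.pem / ca-key.pem.
package provision

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func IssueServerCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey) error {
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-334221"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the profile dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: san=[127.0.0.1 192.168.39.137 localhost minikube multinode-334221]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.137")},
		DNSNames:    []string{"localhost", "minikube", "multinode-334221"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```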
	I0416 17:07:13.491044   38726 provision.go:177] copyRemoteCerts
	I0416 17:07:13.491102   38726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:07:13.491134   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.493772   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.494120   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.494156   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.494302   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.494486   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.494644   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.494772   38726 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:07:13.576629   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 17:07:13.576691   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 17:07:13.609484   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 17:07:13.609547   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:07:13.638084   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 17:07:13.638145   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:07:13.668815   38726 provision.go:87] duration metric: took 286.437773ms to configureAuth
	I0416 17:07:13.668865   38726 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:07:13.669075   38726 config.go:182] Loaded profile config "multinode-334221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:07:13.669144   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.671938   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.672313   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.672339   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.672521   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.672713   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.672872   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.672994   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.673117   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:07:13.673286   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:07:13.673306   38726 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:08:44.605949   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:08:44.605977   38726 machine.go:97] duration metric: took 1m31.587259535s to provisionDockerMachine
	I0416 17:08:44.605992   38726 start.go:293] postStartSetup for "multinode-334221" (driver="kvm2")
	I0416 17:08:44.606005   38726 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:08:44.606027   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.606377   38726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:08:44.606422   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:08:44.609152   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.609529   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.609559   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.609683   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:08:44.609871   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.610036   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:08:44.610192   38726 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:08:44.716426   38726 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:08:44.721284   38726 command_runner.go:130] > NAME=Buildroot
	I0416 17:08:44.721302   38726 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:08:44.721307   38726 command_runner.go:130] > ID=buildroot
	I0416 17:08:44.721311   38726 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:08:44.721316   38726 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:08:44.721550   38726 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:08:44.721574   38726 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:08:44.721629   38726 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:08:44.721711   38726 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:08:44.721722   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 17:08:44.721814   38726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:08:44.734372   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:08:44.764101   38726 start.go:296] duration metric: took 158.096587ms for postStartSetup
	I0416 17:08:44.764144   38726 fix.go:56] duration metric: took 1m31.766797827s for fixHost
	I0416 17:08:44.764162   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:08:44.766836   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.767312   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.767342   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.767461   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:08:44.767655   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.767837   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.768021   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:08:44.768211   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:08:44.768361   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:08:44.768371   38726 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:08:44.870116   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713287324.852816822
	
	I0416 17:08:44.870147   38726 fix.go:216] guest clock: 1713287324.852816822
	I0416 17:08:44.870158   38726 fix.go:229] Guest: 2024-04-16 17:08:44.852816822 +0000 UTC Remote: 2024-04-16 17:08:44.764148197 +0000 UTC m=+91.905067067 (delta=88.668625ms)
	I0416 17:08:44.870186   38726 fix.go:200] guest clock delta is within tolerance: 88.668625ms
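
The clock check above runs `date +%s.%N` on the guest, parses the result (1713287324.852816822) into a timestamp, and compares it with the host-side reading of the same moment, accepting it because the ~88.67ms delta is within tolerance. A small sketch of that parse-and-compare step; the one-second tolerance here is illustrative, not minikube's actual threshold.

```go
// clockdelta.go: sketch of the guest-clock check: parse "seconds.nanoseconds"
// output of `date +%s.%N` and compare it with the host-side timestamp.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1713287324.852816822" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// right-pad the fraction to nine digits so ".85" means 850,000,000 ns
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1713287324.852816822")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 4, 16, 17, 8, 44, 764148197, time.UTC) // host-side reading from the log
	delta := guest.Sub(remote)                                       // 88.668625ms, as reported above
	const tolerance = time.Second                                    // illustrative threshold only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() <= tolerance)
}
```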
	I0416 17:08:44.870193   38726 start.go:83] releasing machines lock for "multinode-334221", held for 1m31.872860229s
	I0416 17:08:44.870218   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.870487   38726 main.go:141] libmachine: (multinode-334221) Calling .GetIP
	I0416 17:08:44.872873   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.873217   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.873240   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.873408   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.874047   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.874238   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.874319   38726 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:08:44.874364   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:08:44.874468   38726 ssh_runner.go:195] Run: cat /version.json
	I0416 17:08:44.874492   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:08:44.876878   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.877168   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.877195   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.877218   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.877358   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:08:44.877529   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.877676   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:08:44.877682   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.877705   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.877837   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:08:44.877849   38726 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:08:44.877947   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.878091   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:08:44.878297   38726 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:08:44.974557   38726 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 17:08:44.975292   38726 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 17:08:44.975444   38726 ssh_runner.go:195] Run: systemctl --version
	I0416 17:08:44.983215   38726 command_runner.go:130] > systemd 252 (252)
	I0416 17:08:44.983247   38726 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 17:08:44.983505   38726 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:08:45.152810   38726 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 17:08:45.162874   38726 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 17:08:45.163320   38726 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:08:45.163399   38726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:08:45.173978   38726 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 17:08:45.174017   38726 start.go:494] detecting cgroup driver to use...
	I0416 17:08:45.174094   38726 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:08:45.192938   38726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:08:45.209118   38726 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:08:45.209178   38726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:08:45.224855   38726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:08:45.241963   38726 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:08:45.402793   38726 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:08:45.558710   38726 docker.go:233] disabling docker service ...
	I0416 17:08:45.558800   38726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:08:45.575579   38726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:08:45.590194   38726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:08:45.732928   38726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:08:45.886383   38726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:08:45.901910   38726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:08:45.924793   38726 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0416 17:08:45.924849   38726 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:08:45.924905   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.936588   38726 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:08:45.936654   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.948577   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.960266   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.973162   38726 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:08:45.985110   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.997105   38726 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:46.011301   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:46.023318   38726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:08:46.033633   38726 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 17:08:46.033697   38726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:08:46.043983   38726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:08:46.186854   38726 ssh_runner.go:195] Run: sudo systemctl restart crio
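
The sequence above patches /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to "cgroupfs", re-adds conmon_cgroup = "pod", injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, and then reloads systemd and restarts crio. Below is a minimal sketch of the first few of those line-oriented rewrites done with Go's regexp package, assuming the drop-in file already contains pause_image and cgroup_manager keys; it mirrors the sed commands shown in the log and is not minikube's crio.go.

```go
// crioconf.go: sketch of the sed-style edits applied to
// /etc/crio/crio.conf.d/02-crio.conf in the log above.
package main

import (
	"fmt"
	"regexp"
)

func patchCrioConf(conf string) string {
	// pin the pause image (sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|')
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// force the cgroupfs cgroup manager
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it right after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "[crio.runtime]\npause_image = \"k8s.gcr.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(patchCrioConf(in))
}
```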
	I0416 17:08:46.444495   38726 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:08:46.444569   38726 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:08:46.451729   38726 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0416 17:08:46.451751   38726 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 17:08:46.451757   38726 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I0416 17:08:46.451764   38726 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 17:08:46.451769   38726 command_runner.go:130] > Access: 2024-04-16 17:08:46.388176059 +0000
	I0416 17:08:46.451786   38726 command_runner.go:130] > Modify: 2024-04-16 17:08:46.316172959 +0000
	I0416 17:08:46.451794   38726 command_runner.go:130] > Change: 2024-04-16 17:08:46.316172959 +0000
	I0416 17:08:46.451803   38726 command_runner.go:130] >  Birth: -
	I0416 17:08:46.451822   38726 start.go:562] Will wait 60s for crictl version
	I0416 17:08:46.451886   38726 ssh_runner.go:195] Run: which crictl
	I0416 17:08:46.456722   38726 command_runner.go:130] > /usr/bin/crictl
	I0416 17:08:46.456797   38726 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:08:46.499963   38726 command_runner.go:130] > Version:  0.1.0
	I0416 17:08:46.499984   38726 command_runner.go:130] > RuntimeName:  cri-o
	I0416 17:08:46.499989   38726 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0416 17:08:46.499994   38726 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 17:08:46.500148   38726 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:08:46.500226   38726 ssh_runner.go:195] Run: crio --version
	I0416 17:08:46.530148   38726 command_runner.go:130] > crio version 1.29.1
	I0416 17:08:46.530169   38726 command_runner.go:130] > Version:        1.29.1
	I0416 17:08:46.530175   38726 command_runner.go:130] > GitCommit:      unknown
	I0416 17:08:46.530179   38726 command_runner.go:130] > GitCommitDate:  unknown
	I0416 17:08:46.530183   38726 command_runner.go:130] > GitTreeState:   clean
	I0416 17:08:46.530189   38726 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0416 17:08:46.530193   38726 command_runner.go:130] > GoVersion:      go1.21.6
	I0416 17:08:46.530196   38726 command_runner.go:130] > Compiler:       gc
	I0416 17:08:46.530201   38726 command_runner.go:130] > Platform:       linux/amd64
	I0416 17:08:46.530205   38726 command_runner.go:130] > Linkmode:       dynamic
	I0416 17:08:46.530222   38726 command_runner.go:130] > BuildTags:      
	I0416 17:08:46.530226   38726 command_runner.go:130] >   containers_image_ostree_stub
	I0416 17:08:46.530233   38726 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0416 17:08:46.530237   38726 command_runner.go:130] >   btrfs_noversion
	I0416 17:08:46.530241   38726 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0416 17:08:46.530246   38726 command_runner.go:130] >   libdm_no_deferred_remove
	I0416 17:08:46.530249   38726 command_runner.go:130] >   seccomp
	I0416 17:08:46.530253   38726 command_runner.go:130] > LDFlags:          unknown
	I0416 17:08:46.530261   38726 command_runner.go:130] > SeccompEnabled:   true
	I0416 17:08:46.530265   38726 command_runner.go:130] > AppArmorEnabled:  false
	I0416 17:08:46.531566   38726 ssh_runner.go:195] Run: crio --version
	I0416 17:08:46.563207   38726 command_runner.go:130] > crio version 1.29.1
	I0416 17:08:46.563228   38726 command_runner.go:130] > Version:        1.29.1
	I0416 17:08:46.563233   38726 command_runner.go:130] > GitCommit:      unknown
	I0416 17:08:46.563238   38726 command_runner.go:130] > GitCommitDate:  unknown
	I0416 17:08:46.563242   38726 command_runner.go:130] > GitTreeState:   clean
	I0416 17:08:46.563247   38726 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0416 17:08:46.563251   38726 command_runner.go:130] > GoVersion:      go1.21.6
	I0416 17:08:46.563255   38726 command_runner.go:130] > Compiler:       gc
	I0416 17:08:46.563260   38726 command_runner.go:130] > Platform:       linux/amd64
	I0416 17:08:46.563264   38726 command_runner.go:130] > Linkmode:       dynamic
	I0416 17:08:46.563268   38726 command_runner.go:130] > BuildTags:      
	I0416 17:08:46.563273   38726 command_runner.go:130] >   containers_image_ostree_stub
	I0416 17:08:46.563277   38726 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0416 17:08:46.563281   38726 command_runner.go:130] >   btrfs_noversion
	I0416 17:08:46.563286   38726 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0416 17:08:46.563291   38726 command_runner.go:130] >   libdm_no_deferred_remove
	I0416 17:08:46.563299   38726 command_runner.go:130] >   seccomp
	I0416 17:08:46.563305   38726 command_runner.go:130] > LDFlags:          unknown
	I0416 17:08:46.563311   38726 command_runner.go:130] > SeccompEnabled:   true
	I0416 17:08:46.563318   38726 command_runner.go:130] > AppArmorEnabled:  false
	I0416 17:08:46.567393   38726 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 17:08:46.568891   38726 main.go:141] libmachine: (multinode-334221) Calling .GetIP
	I0416 17:08:46.571433   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:46.571781   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:46.571812   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:46.572034   38726 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 17:08:46.576590   38726 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0416 17:08:46.576729   38726 kubeadm.go:877] updating cluster {Name:multinode-334221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.95 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
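
The profile dump above is a printed snapshot of the cluster configuration: a kvm2/CRI-O profile running Kubernetes v1.29.3 with one control-plane node (192.168.39.137) and two workers (m02 at 192.168.39.78, m03 at 192.168.39.95). As a reading aid, here is a trimmed, hypothetical Go mirror of that shape; the field names follow the keys printed in the log, but the type names are illustrative and not minikube's actual config package.

```go
// clusterprofile.go: trimmed, illustrative mirror of the profile dump above.
package profile

// KubernetesConfig captures the nested KubernetesConfig:{...} block.
type KubernetesConfig struct {
	KubernetesVersion string // "v1.29.3"
	ClusterName       string // "multinode-334221"
	ContainerRuntime  string // "crio"
	NetworkPlugin     string // "cni"
	ServiceCIDR       string // "10.96.0.0/12"
}

// Node captures one entry of the Nodes:[...] list.
type Node struct {
	Name              string // "" (primary), "m02", "m03"
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

// ClusterProfile captures the outer braces of the dump (subset of fields only).
type ClusterProfile struct {
	Name               string
	Driver             string // "kvm2"
	Memory             int    // 2200
	CPUs               int    // 2
	DiskSize           int    // 20000
	KubernetesConfig   KubernetesConfig
	Nodes              []Node
	Addons             map[string]bool
	MultiNodeRequested bool
}
```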
	I0416 17:08:46.576873   38726 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:08:46.576920   38726 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:08:46.619187   38726 command_runner.go:130] > {
	I0416 17:08:46.619205   38726 command_runner.go:130] >   "images": [
	I0416 17:08:46.619209   38726 command_runner.go:130] >     {
	I0416 17:08:46.619217   38726 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0416 17:08:46.619221   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619228   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0416 17:08:46.619231   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619236   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619244   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0416 17:08:46.619253   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0416 17:08:46.619256   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619262   38726 command_runner.go:130] >       "size": "65291810",
	I0416 17:08:46.619266   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619270   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619279   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619284   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619289   38726 command_runner.go:130] >     },
	I0416 17:08:46.619293   38726 command_runner.go:130] >     {
	I0416 17:08:46.619306   38726 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0416 17:08:46.619310   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619315   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0416 17:08:46.619319   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619325   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619332   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0416 17:08:46.619344   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0416 17:08:46.619348   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619352   38726 command_runner.go:130] >       "size": "1363676",
	I0416 17:08:46.619356   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619362   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619367   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619371   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619375   38726 command_runner.go:130] >     },
	I0416 17:08:46.619378   38726 command_runner.go:130] >     {
	I0416 17:08:46.619385   38726 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0416 17:08:46.619389   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619394   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0416 17:08:46.619398   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619402   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619410   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0416 17:08:46.619418   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0416 17:08:46.619422   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619430   38726 command_runner.go:130] >       "size": "31470524",
	I0416 17:08:46.619434   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619438   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619441   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619445   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619449   38726 command_runner.go:130] >     },
	I0416 17:08:46.619452   38726 command_runner.go:130] >     {
	I0416 17:08:46.619458   38726 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0416 17:08:46.619464   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619469   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0416 17:08:46.619474   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619477   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619485   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0416 17:08:46.619496   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0416 17:08:46.619501   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619505   38726 command_runner.go:130] >       "size": "61245718",
	I0416 17:08:46.619508   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619513   38726 command_runner.go:130] >       "username": "nonroot",
	I0416 17:08:46.619517   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619521   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619527   38726 command_runner.go:130] >     },
	I0416 17:08:46.619531   38726 command_runner.go:130] >     {
	I0416 17:08:46.619536   38726 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0416 17:08:46.619543   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619547   38726 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0416 17:08:46.619553   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619557   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619566   38726 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0416 17:08:46.619575   38726 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0416 17:08:46.619581   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619585   38726 command_runner.go:130] >       "size": "150779692",
	I0416 17:08:46.619591   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619595   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.619600   38726 command_runner.go:130] >       },
	I0416 17:08:46.619604   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619608   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619612   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619615   38726 command_runner.go:130] >     },
	I0416 17:08:46.619619   38726 command_runner.go:130] >     {
	I0416 17:08:46.619625   38726 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0416 17:08:46.619629   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619634   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0416 17:08:46.619647   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619651   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619657   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0416 17:08:46.619664   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0416 17:08:46.619666   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619670   38726 command_runner.go:130] >       "size": "128508878",
	I0416 17:08:46.619674   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619678   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.619681   38726 command_runner.go:130] >       },
	I0416 17:08:46.619685   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619688   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619692   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619695   38726 command_runner.go:130] >     },
	I0416 17:08:46.619700   38726 command_runner.go:130] >     {
	I0416 17:08:46.619706   38726 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0416 17:08:46.619713   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619718   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0416 17:08:46.619723   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619727   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619736   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0416 17:08:46.619746   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0416 17:08:46.619752   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619756   38726 command_runner.go:130] >       "size": "123142962",
	I0416 17:08:46.619762   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619767   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.619772   38726 command_runner.go:130] >       },
	I0416 17:08:46.619776   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619780   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619786   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619790   38726 command_runner.go:130] >     },
	I0416 17:08:46.619795   38726 command_runner.go:130] >     {
	I0416 17:08:46.619801   38726 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0416 17:08:46.619808   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619813   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0416 17:08:46.619818   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619822   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619838   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0416 17:08:46.619847   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0416 17:08:46.619851   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619855   38726 command_runner.go:130] >       "size": "83634073",
	I0416 17:08:46.619859   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619862   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619866   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619870   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619873   38726 command_runner.go:130] >     },
	I0416 17:08:46.619876   38726 command_runner.go:130] >     {
	I0416 17:08:46.619882   38726 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0416 17:08:46.619885   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619890   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0416 17:08:46.619894   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619898   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619905   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0416 17:08:46.619912   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0416 17:08:46.619916   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619919   38726 command_runner.go:130] >       "size": "60724018",
	I0416 17:08:46.619923   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619926   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.619929   38726 command_runner.go:130] >       },
	I0416 17:08:46.619933   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619936   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619940   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619943   38726 command_runner.go:130] >     },
	I0416 17:08:46.619947   38726 command_runner.go:130] >     {
	I0416 17:08:46.619953   38726 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0416 17:08:46.619956   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619960   38726 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0416 17:08:46.619963   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619966   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619973   38726 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0416 17:08:46.619979   38726 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0416 17:08:46.619983   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619987   38726 command_runner.go:130] >       "size": "750414",
	I0416 17:08:46.619991   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619994   38726 command_runner.go:130] >         "value": "65535"
	I0416 17:08:46.619998   38726 command_runner.go:130] >       },
	I0416 17:08:46.620002   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.620006   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.620010   38726 command_runner.go:130] >       "pinned": true
	I0416 17:08:46.620013   38726 command_runner.go:130] >     }
	I0416 17:08:46.620016   38726 command_runner.go:130] >   ]
	I0416 17:08:46.620019   38726 command_runner.go:130] > }
	I0416 17:08:46.620384   38726 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:08:46.620395   38726 crio.go:433] Images already preloaded, skipping extraction
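
The `sudo crictl images --output json` output above is what the preload check consumes: it decodes the image list and verifies that the images required for Kubernetes v1.29.3 on CRI-O are already present, which is why crio.go reports "all images are preloaded" and skips extraction. A small sketch, assuming the JSON shape shown in the log, of that decode-and-check step; the required-image list here is abbreviated for illustration.

```go
// preloadcheck.go: sketch of decoding `crictl images --output json` and
// checking that the expected images are present.
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// allPreloaded reports whether every wanted repo tag shows up in the crictl output.
func allPreloaded(crictlJSON []byte, wanted []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range wanted {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// abbreviated expectation list; the real check covers every control-plane image
	wanted := []string{
		"registry.k8s.io/kube-apiserver:v1.29.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	ok, err := allPreloaded([]byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`), wanted)
	fmt.Println(ok, err)
}
```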
	I0416 17:08:46.620435   38726 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:08:46.654222   38726 command_runner.go:130] > {
	I0416 17:08:46.654239   38726 command_runner.go:130] >   "images": [
	I0416 17:08:46.654243   38726 command_runner.go:130] >     {
	I0416 17:08:46.654252   38726 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0416 17:08:46.654259   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654265   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0416 17:08:46.654269   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654275   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654283   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0416 17:08:46.654291   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0416 17:08:46.654300   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654308   38726 command_runner.go:130] >       "size": "65291810",
	I0416 17:08:46.654312   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654316   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654333   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654340   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654344   38726 command_runner.go:130] >     },
	I0416 17:08:46.654349   38726 command_runner.go:130] >     {
	I0416 17:08:46.654355   38726 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0416 17:08:46.654361   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654367   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0416 17:08:46.654373   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654378   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654387   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0416 17:08:46.654396   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0416 17:08:46.654401   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654406   38726 command_runner.go:130] >       "size": "1363676",
	I0416 17:08:46.654412   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654418   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654424   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654428   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654436   38726 command_runner.go:130] >     },
	I0416 17:08:46.654442   38726 command_runner.go:130] >     {
	I0416 17:08:46.654447   38726 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0416 17:08:46.654453   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654459   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0416 17:08:46.654465   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654469   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654479   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0416 17:08:46.654488   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0416 17:08:46.654494   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654499   38726 command_runner.go:130] >       "size": "31470524",
	I0416 17:08:46.654505   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654508   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654515   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654519   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654524   38726 command_runner.go:130] >     },
	I0416 17:08:46.654528   38726 command_runner.go:130] >     {
	I0416 17:08:46.654536   38726 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0416 17:08:46.654542   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654547   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0416 17:08:46.654553   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654557   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654567   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0416 17:08:46.654580   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0416 17:08:46.654586   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654590   38726 command_runner.go:130] >       "size": "61245718",
	I0416 17:08:46.654596   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654601   38726 command_runner.go:130] >       "username": "nonroot",
	I0416 17:08:46.654609   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654616   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654619   38726 command_runner.go:130] >     },
	I0416 17:08:46.654626   38726 command_runner.go:130] >     {
	I0416 17:08:46.654632   38726 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0416 17:08:46.654638   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654643   38726 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0416 17:08:46.654646   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654651   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654661   38726 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0416 17:08:46.654667   38726 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0416 17:08:46.654673   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654677   38726 command_runner.go:130] >       "size": "150779692",
	I0416 17:08:46.654680   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.654684   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.654687   38726 command_runner.go:130] >       },
	I0416 17:08:46.654691   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654695   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654701   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654705   38726 command_runner.go:130] >     },
	I0416 17:08:46.654709   38726 command_runner.go:130] >     {
	I0416 17:08:46.654716   38726 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0416 17:08:46.654722   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654727   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0416 17:08:46.654733   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654737   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654743   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0416 17:08:46.654752   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0416 17:08:46.654758   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654762   38726 command_runner.go:130] >       "size": "128508878",
	I0416 17:08:46.654768   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.654772   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.654778   38726 command_runner.go:130] >       },
	I0416 17:08:46.654782   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654788   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654791   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654797   38726 command_runner.go:130] >     },
	I0416 17:08:46.654800   38726 command_runner.go:130] >     {
	I0416 17:08:46.654808   38726 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0416 17:08:46.654815   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654820   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0416 17:08:46.654826   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654830   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654837   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0416 17:08:46.654847   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0416 17:08:46.654856   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654861   38726 command_runner.go:130] >       "size": "123142962",
	I0416 17:08:46.654867   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.654871   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.654874   38726 command_runner.go:130] >       },
	I0416 17:08:46.654878   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654882   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654885   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654888   38726 command_runner.go:130] >     },
	I0416 17:08:46.654891   38726 command_runner.go:130] >     {
	I0416 17:08:46.654897   38726 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0416 17:08:46.654903   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654908   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0416 17:08:46.654914   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654917   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654929   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0416 17:08:46.654938   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0416 17:08:46.654941   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654948   38726 command_runner.go:130] >       "size": "83634073",
	I0416 17:08:46.654955   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654959   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654965   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654969   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654975   38726 command_runner.go:130] >     },
	I0416 17:08:46.654978   38726 command_runner.go:130] >     {
	I0416 17:08:46.654987   38726 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0416 17:08:46.654991   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654996   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0416 17:08:46.655002   38726 command_runner.go:130] >       ],
	I0416 17:08:46.655006   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.655015   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0416 17:08:46.655026   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0416 17:08:46.655032   38726 command_runner.go:130] >       ],
	I0416 17:08:46.655037   38726 command_runner.go:130] >       "size": "60724018",
	I0416 17:08:46.655043   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.655047   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.655053   38726 command_runner.go:130] >       },
	I0416 17:08:46.655058   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.655064   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.655068   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.655071   38726 command_runner.go:130] >     },
	I0416 17:08:46.655074   38726 command_runner.go:130] >     {
	I0416 17:08:46.655083   38726 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0416 17:08:46.655087   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.655094   38726 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0416 17:08:46.655097   38726 command_runner.go:130] >       ],
	I0416 17:08:46.655101   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.655109   38726 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0416 17:08:46.655121   38726 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0416 17:08:46.655127   38726 command_runner.go:130] >       ],
	I0416 17:08:46.655131   38726 command_runner.go:130] >       "size": "750414",
	I0416 17:08:46.655135   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.655141   38726 command_runner.go:130] >         "value": "65535"
	I0416 17:08:46.655145   38726 command_runner.go:130] >       },
	I0416 17:08:46.655149   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.655153   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.655160   38726 command_runner.go:130] >       "pinned": true
	I0416 17:08:46.655163   38726 command_runner.go:130] >     }
	I0416 17:08:46.655166   38726 command_runner.go:130] >   ]
	I0416 17:08:46.655171   38726 command_runner.go:130] > }
	I0416 17:08:46.655635   38726 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:08:46.655649   38726 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:08:46.655657   38726 kubeadm.go:928] updating node { 192.168.39.137 8443 v1.29.3 crio true true} ...
	I0416 17:08:46.655746   38726 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-334221 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:08:46.655804   38726 ssh_runner.go:195] Run: crio config
	I0416 17:08:46.698698   38726 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0416 17:08:46.698721   38726 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0416 17:08:46.698728   38726 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0416 17:08:46.698732   38726 command_runner.go:130] > #
	I0416 17:08:46.698739   38726 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0416 17:08:46.698745   38726 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0416 17:08:46.698750   38726 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0416 17:08:46.698757   38726 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0416 17:08:46.698761   38726 command_runner.go:130] > # reload'.
	I0416 17:08:46.698767   38726 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0416 17:08:46.698776   38726 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0416 17:08:46.698786   38726 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0416 17:08:46.698794   38726 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0416 17:08:46.698806   38726 command_runner.go:130] > [crio]
	I0416 17:08:46.698815   38726 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0416 17:08:46.698824   38726 command_runner.go:130] > # containers images, in this directory.
	I0416 17:08:46.698829   38726 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0416 17:08:46.698848   38726 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0416 17:08:46.698856   38726 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0416 17:08:46.698865   38726 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0416 17:08:46.698872   38726 command_runner.go:130] > # imagestore = ""
	I0416 17:08:46.698881   38726 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0416 17:08:46.698895   38726 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0416 17:08:46.698905   38726 command_runner.go:130] > storage_driver = "overlay"
	I0416 17:08:46.698912   38726 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0416 17:08:46.698918   38726 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0416 17:08:46.698922   38726 command_runner.go:130] > storage_option = [
	I0416 17:08:46.698927   38726 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0416 17:08:46.698930   38726 command_runner.go:130] > ]
	I0416 17:08:46.698938   38726 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0416 17:08:46.698943   38726 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0416 17:08:46.698950   38726 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0416 17:08:46.698955   38726 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0416 17:08:46.698961   38726 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0416 17:08:46.698969   38726 command_runner.go:130] > # always happen on a node reboot
	I0416 17:08:46.698973   38726 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0416 17:08:46.698984   38726 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0416 17:08:46.698990   38726 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0416 17:08:46.698995   38726 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0416 17:08:46.699000   38726 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0416 17:08:46.699008   38726 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0416 17:08:46.699017   38726 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0416 17:08:46.699022   38726 command_runner.go:130] > # internal_wipe = true
	I0416 17:08:46.699030   38726 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0416 17:08:46.699042   38726 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0416 17:08:46.699052   38726 command_runner.go:130] > # internal_repair = false
	I0416 17:08:46.699061   38726 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0416 17:08:46.699074   38726 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0416 17:08:46.699083   38726 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0416 17:08:46.699089   38726 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0416 17:08:46.699097   38726 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0416 17:08:46.699100   38726 command_runner.go:130] > [crio.api]
	I0416 17:08:46.699105   38726 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0416 17:08:46.699114   38726 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0416 17:08:46.699129   38726 command_runner.go:130] > # IP address on which the stream server will listen.
	I0416 17:08:46.699143   38726 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0416 17:08:46.699158   38726 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0416 17:08:46.699169   38726 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0416 17:08:46.699177   38726 command_runner.go:130] > # stream_port = "0"
	I0416 17:08:46.699186   38726 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0416 17:08:46.699195   38726 command_runner.go:130] > # stream_enable_tls = false
	I0416 17:08:46.699204   38726 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0416 17:08:46.699213   38726 command_runner.go:130] > # stream_idle_timeout = ""
	I0416 17:08:46.699224   38726 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0416 17:08:46.699238   38726 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0416 17:08:46.699247   38726 command_runner.go:130] > # minutes.
	I0416 17:08:46.699253   38726 command_runner.go:130] > # stream_tls_cert = ""
	I0416 17:08:46.699266   38726 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0416 17:08:46.699277   38726 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0416 17:08:46.699283   38726 command_runner.go:130] > # stream_tls_key = ""
	I0416 17:08:46.699292   38726 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0416 17:08:46.699323   38726 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0416 17:08:46.699338   38726 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0416 17:08:46.699348   38726 command_runner.go:130] > # stream_tls_ca = ""
	I0416 17:08:46.699360   38726 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0416 17:08:46.699370   38726 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0416 17:08:46.699421   38726 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0416 17:08:46.699448   38726 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0416 17:08:46.699460   38726 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0416 17:08:46.699469   38726 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0416 17:08:46.699480   38726 command_runner.go:130] > [crio.runtime]
	I0416 17:08:46.699493   38726 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0416 17:08:46.699504   38726 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0416 17:08:46.699512   38726 command_runner.go:130] > # "nofile=1024:2048"
	I0416 17:08:46.699522   38726 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0416 17:08:46.699532   38726 command_runner.go:130] > # default_ulimits = [
	I0416 17:08:46.699538   38726 command_runner.go:130] > # ]
	I0416 17:08:46.699546   38726 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0416 17:08:46.699556   38726 command_runner.go:130] > # no_pivot = false
	I0416 17:08:46.699565   38726 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0416 17:08:46.699579   38726 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0416 17:08:46.699600   38726 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0416 17:08:46.699615   38726 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0416 17:08:46.699627   38726 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0416 17:08:46.699642   38726 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0416 17:08:46.699649   38726 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0416 17:08:46.699660   38726 command_runner.go:130] > # Cgroup setting for conmon
	I0416 17:08:46.699670   38726 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0416 17:08:46.699682   38726 command_runner.go:130] > conmon_cgroup = "pod"
	I0416 17:08:46.699693   38726 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0416 17:08:46.699704   38726 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0416 17:08:46.699718   38726 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0416 17:08:46.699729   38726 command_runner.go:130] > conmon_env = [
	I0416 17:08:46.699739   38726 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0416 17:08:46.699747   38726 command_runner.go:130] > ]
	I0416 17:08:46.699756   38726 command_runner.go:130] > # Additional environment variables to set for all the
	I0416 17:08:46.699767   38726 command_runner.go:130] > # containers. These are overridden if set in the
	I0416 17:08:46.699777   38726 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0416 17:08:46.699788   38726 command_runner.go:130] > # default_env = [
	I0416 17:08:46.699796   38726 command_runner.go:130] > # ]
	I0416 17:08:46.699805   38726 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0416 17:08:46.699819   38726 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0416 17:08:46.699828   38726 command_runner.go:130] > # selinux = false
	I0416 17:08:46.699838   38726 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0416 17:08:46.699850   38726 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0416 17:08:46.699859   38726 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0416 17:08:46.699863   38726 command_runner.go:130] > # seccomp_profile = ""
	I0416 17:08:46.699868   38726 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0416 17:08:46.699877   38726 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0416 17:08:46.699886   38726 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0416 17:08:46.699897   38726 command_runner.go:130] > # which might increase security.
	I0416 17:08:46.699905   38726 command_runner.go:130] > # This option is currently deprecated,
	I0416 17:08:46.699918   38726 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0416 17:08:46.699928   38726 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0416 17:08:46.699939   38726 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0416 17:08:46.699955   38726 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0416 17:08:46.699967   38726 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0416 17:08:46.699979   38726 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0416 17:08:46.699990   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.700004   38726 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0416 17:08:46.700020   38726 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0416 17:08:46.700030   38726 command_runner.go:130] > # the cgroup blockio controller.
	I0416 17:08:46.700037   38726 command_runner.go:130] > # blockio_config_file = ""
	I0416 17:08:46.700050   38726 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0416 17:08:46.700059   38726 command_runner.go:130] > # blockio parameters.
	I0416 17:08:46.700066   38726 command_runner.go:130] > # blockio_reload = false
	I0416 17:08:46.700080   38726 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0416 17:08:46.700091   38726 command_runner.go:130] > # irqbalance daemon.
	I0416 17:08:46.700103   38726 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0416 17:08:46.700113   38726 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0416 17:08:46.700140   38726 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0416 17:08:46.700148   38726 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0416 17:08:46.700160   38726 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0416 17:08:46.700173   38726 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0416 17:08:46.700185   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.700193   38726 command_runner.go:130] > # rdt_config_file = ""
	I0416 17:08:46.700203   38726 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0416 17:08:46.700214   38726 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0416 17:08:46.700237   38726 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0416 17:08:46.700248   38726 command_runner.go:130] > # separate_pull_cgroup = ""
	I0416 17:08:46.700258   38726 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0416 17:08:46.700271   38726 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0416 17:08:46.700280   38726 command_runner.go:130] > # will be added.
	I0416 17:08:46.700287   38726 command_runner.go:130] > # default_capabilities = [
	I0416 17:08:46.700296   38726 command_runner.go:130] > # 	"CHOWN",
	I0416 17:08:46.700303   38726 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0416 17:08:46.700310   38726 command_runner.go:130] > # 	"FSETID",
	I0416 17:08:46.700315   38726 command_runner.go:130] > # 	"FOWNER",
	I0416 17:08:46.700318   38726 command_runner.go:130] > # 	"SETGID",
	I0416 17:08:46.700322   38726 command_runner.go:130] > # 	"SETUID",
	I0416 17:08:46.700326   38726 command_runner.go:130] > # 	"SETPCAP",
	I0416 17:08:46.700330   38726 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0416 17:08:46.700334   38726 command_runner.go:130] > # 	"KILL",
	I0416 17:08:46.700345   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700359   38726 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0416 17:08:46.700373   38726 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0416 17:08:46.700384   38726 command_runner.go:130] > # add_inheritable_capabilities = false
	I0416 17:08:46.700397   38726 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0416 17:08:46.700409   38726 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0416 17:08:46.700419   38726 command_runner.go:130] > default_sysctls = [
	I0416 17:08:46.700431   38726 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0416 17:08:46.700439   38726 command_runner.go:130] > ]
	I0416 17:08:46.700447   38726 command_runner.go:130] > # List of devices on the host that a
	I0416 17:08:46.700459   38726 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0416 17:08:46.700469   38726 command_runner.go:130] > # allowed_devices = [
	I0416 17:08:46.700475   38726 command_runner.go:130] > # 	"/dev/fuse",
	I0416 17:08:46.700484   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700491   38726 command_runner.go:130] > # List of additional devices. specified as
	I0416 17:08:46.700505   38726 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0416 17:08:46.700513   38726 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0416 17:08:46.700526   38726 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0416 17:08:46.700536   38726 command_runner.go:130] > # additional_devices = [
	I0416 17:08:46.700542   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700556   38726 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0416 17:08:46.700566   38726 command_runner.go:130] > # cdi_spec_dirs = [
	I0416 17:08:46.700571   38726 command_runner.go:130] > # 	"/etc/cdi",
	I0416 17:08:46.700578   38726 command_runner.go:130] > # 	"/var/run/cdi",
	I0416 17:08:46.700583   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700596   38726 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0416 17:08:46.700606   38726 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0416 17:08:46.700615   38726 command_runner.go:130] > # Defaults to false.
	I0416 17:08:46.700623   38726 command_runner.go:130] > # device_ownership_from_security_context = false
	I0416 17:08:46.700636   38726 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0416 17:08:46.700649   38726 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0416 17:08:46.700658   38726 command_runner.go:130] > # hooks_dir = [
	I0416 17:08:46.700666   38726 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0416 17:08:46.700675   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700684   38726 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0416 17:08:46.700697   38726 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0416 17:08:46.700706   38726 command_runner.go:130] > # its default mounts from the following two files:
	I0416 17:08:46.700714   38726 command_runner.go:130] > #
	I0416 17:08:46.700724   38726 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0416 17:08:46.700737   38726 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0416 17:08:46.700748   38726 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0416 17:08:46.700756   38726 command_runner.go:130] > #
	I0416 17:08:46.700765   38726 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0416 17:08:46.700779   38726 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0416 17:08:46.700792   38726 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0416 17:08:46.700803   38726 command_runner.go:130] > #      only add mounts it finds in this file.
	I0416 17:08:46.700810   38726 command_runner.go:130] > #
	I0416 17:08:46.700816   38726 command_runner.go:130] > # default_mounts_file = ""
	I0416 17:08:46.700824   38726 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0416 17:08:46.700852   38726 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0416 17:08:46.700863   38726 command_runner.go:130] > pids_limit = 1024
	I0416 17:08:46.700873   38726 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0416 17:08:46.700885   38726 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0416 17:08:46.700897   38726 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0416 17:08:46.700912   38726 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0416 17:08:46.700922   38726 command_runner.go:130] > # log_size_max = -1
	I0416 17:08:46.700933   38726 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0416 17:08:46.700944   38726 command_runner.go:130] > # log_to_journald = false
	I0416 17:08:46.700953   38726 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0416 17:08:46.700964   38726 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0416 17:08:46.700971   38726 command_runner.go:130] > # Path to directory for container attach sockets.
	I0416 17:08:46.700981   38726 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0416 17:08:46.700986   38726 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0416 17:08:46.700996   38726 command_runner.go:130] > # bind_mount_prefix = ""
	I0416 17:08:46.701005   38726 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0416 17:08:46.701015   38726 command_runner.go:130] > # read_only = false
	I0416 17:08:46.701024   38726 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0416 17:08:46.701037   38726 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0416 17:08:46.701048   38726 command_runner.go:130] > # live configuration reload.
	I0416 17:08:46.701054   38726 command_runner.go:130] > # log_level = "info"
	I0416 17:08:46.701066   38726 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0416 17:08:46.701078   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.701090   38726 command_runner.go:130] > # log_filter = ""
	I0416 17:08:46.701103   38726 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0416 17:08:46.701122   38726 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0416 17:08:46.701131   38726 command_runner.go:130] > # separated by comma.
	I0416 17:08:46.701144   38726 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 17:08:46.701154   38726 command_runner.go:130] > # uid_mappings = ""
	I0416 17:08:46.701164   38726 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0416 17:08:46.701177   38726 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0416 17:08:46.701186   38726 command_runner.go:130] > # separated by comma.
	I0416 17:08:46.701198   38726 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 17:08:46.701208   38726 command_runner.go:130] > # gid_mappings = ""
	I0416 17:08:46.701219   38726 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0416 17:08:46.701232   38726 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0416 17:08:46.701248   38726 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0416 17:08:46.701264   38726 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 17:08:46.701273   38726 command_runner.go:130] > # minimum_mappable_uid = -1
	I0416 17:08:46.701282   38726 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0416 17:08:46.701295   38726 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0416 17:08:46.701306   38726 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0416 17:08:46.701320   38726 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 17:08:46.701329   38726 command_runner.go:130] > # minimum_mappable_gid = -1
	I0416 17:08:46.701338   38726 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0416 17:08:46.701352   38726 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0416 17:08:46.701364   38726 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0416 17:08:46.701373   38726 command_runner.go:130] > # ctr_stop_timeout = 30
	I0416 17:08:46.701385   38726 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0416 17:08:46.701393   38726 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0416 17:08:46.701400   38726 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0416 17:08:46.701409   38726 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0416 17:08:46.701419   38726 command_runner.go:130] > drop_infra_ctr = false
	I0416 17:08:46.701433   38726 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0416 17:08:46.701445   38726 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0416 17:08:46.701459   38726 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0416 17:08:46.701469   38726 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0416 17:08:46.701478   38726 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0416 17:08:46.701487   38726 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0416 17:08:46.701497   38726 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0416 17:08:46.701509   38726 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0416 17:08:46.701519   38726 command_runner.go:130] > # shared_cpuset = ""
	I0416 17:08:46.701529   38726 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0416 17:08:46.701540   38726 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0416 17:08:46.701550   38726 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0416 17:08:46.701564   38726 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0416 17:08:46.701572   38726 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0416 17:08:46.701582   38726 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0416 17:08:46.701595   38726 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0416 17:08:46.701605   38726 command_runner.go:130] > # enable_criu_support = false
	I0416 17:08:46.701613   38726 command_runner.go:130] > # Enable/disable the generation of the container,
	I0416 17:08:46.701629   38726 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0416 17:08:46.701638   38726 command_runner.go:130] > # enable_pod_events = false
	I0416 17:08:46.701648   38726 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0416 17:08:46.701663   38726 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0416 17:08:46.701673   38726 command_runner.go:130] > # default_runtime = "runc"
	I0416 17:08:46.701682   38726 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0416 17:08:46.701695   38726 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0416 17:08:46.701712   38726 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0416 17:08:46.701723   38726 command_runner.go:130] > # creation as a file is not desired either.
	I0416 17:08:46.701738   38726 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0416 17:08:46.701746   38726 command_runner.go:130] > # the hostname is being managed dynamically.
	I0416 17:08:46.701753   38726 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0416 17:08:46.701762   38726 command_runner.go:130] > # ]
	I0416 17:08:46.701772   38726 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0416 17:08:46.701786   38726 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0416 17:08:46.701798   38726 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0416 17:08:46.701809   38726 command_runner.go:130] > # Each entry in the table should follow the format:
	I0416 17:08:46.701817   38726 command_runner.go:130] > #
	I0416 17:08:46.701825   38726 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0416 17:08:46.701833   38726 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0416 17:08:46.701882   38726 command_runner.go:130] > # runtime_type = "oci"
	I0416 17:08:46.701898   38726 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0416 17:08:46.701906   38726 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0416 17:08:46.701913   38726 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0416 17:08:46.701921   38726 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0416 17:08:46.701926   38726 command_runner.go:130] > # monitor_env = []
	I0416 17:08:46.701937   38726 command_runner.go:130] > # privileged_without_host_devices = false
	I0416 17:08:46.701946   38726 command_runner.go:130] > # allowed_annotations = []
	I0416 17:08:46.701959   38726 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0416 17:08:46.701968   38726 command_runner.go:130] > # Where:
	I0416 17:08:46.701979   38726 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0416 17:08:46.701992   38726 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0416 17:08:46.702002   38726 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0416 17:08:46.702011   38726 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0416 17:08:46.702019   38726 command_runner.go:130] > #   in $PATH.
	I0416 17:08:46.702033   38726 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0416 17:08:46.702045   38726 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0416 17:08:46.702062   38726 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0416 17:08:46.702070   38726 command_runner.go:130] > #   state.
	I0416 17:08:46.702080   38726 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0416 17:08:46.702090   38726 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0416 17:08:46.702102   38726 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0416 17:08:46.702114   38726 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0416 17:08:46.702133   38726 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0416 17:08:46.702146   38726 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0416 17:08:46.702160   38726 command_runner.go:130] > #   The currently recognized values are:
	I0416 17:08:46.702173   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0416 17:08:46.702183   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0416 17:08:46.702196   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0416 17:08:46.702209   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0416 17:08:46.702224   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0416 17:08:46.702237   38726 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0416 17:08:46.702250   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0416 17:08:46.702261   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0416 17:08:46.702271   38726 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0416 17:08:46.702284   38726 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0416 17:08:46.702295   38726 command_runner.go:130] > #   deprecated option "conmon".
	I0416 17:08:46.702308   38726 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0416 17:08:46.702319   38726 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0416 17:08:46.702334   38726 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0416 17:08:46.702343   38726 command_runner.go:130] > #   should be moved to the container's cgroup
	I0416 17:08:46.702352   38726 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0416 17:08:46.702363   38726 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0416 17:08:46.702377   38726 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0416 17:08:46.702388   38726 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0416 17:08:46.702396   38726 command_runner.go:130] > #
	I0416 17:08:46.702407   38726 command_runner.go:130] > # Using the seccomp notifier feature:
	I0416 17:08:46.702416   38726 command_runner.go:130] > #
	I0416 17:08:46.702426   38726 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0416 17:08:46.702436   38726 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0416 17:08:46.702443   38726 command_runner.go:130] > #
	I0416 17:08:46.702455   38726 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0416 17:08:46.702469   38726 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0416 17:08:46.702477   38726 command_runner.go:130] > #
	I0416 17:08:46.702491   38726 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0416 17:08:46.702500   38726 command_runner.go:130] > # feature.
	I0416 17:08:46.702504   38726 command_runner.go:130] > #
	I0416 17:08:46.702516   38726 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0416 17:08:46.702524   38726 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0416 17:08:46.702536   38726 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0416 17:08:46.702549   38726 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0416 17:08:46.702562   38726 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0416 17:08:46.702571   38726 command_runner.go:130] > #
	I0416 17:08:46.702581   38726 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0416 17:08:46.702593   38726 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0416 17:08:46.702598   38726 command_runner.go:130] > #
	I0416 17:08:46.702607   38726 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0416 17:08:46.702613   38726 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0416 17:08:46.702621   38726 command_runner.go:130] > #
	I0416 17:08:46.702639   38726 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0416 17:08:46.702652   38726 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0416 17:08:46.702660   38726 command_runner.go:130] > # limitation.
	I0416 17:08:46.702670   38726 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0416 17:08:46.702680   38726 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0416 17:08:46.702687   38726 command_runner.go:130] > runtime_type = "oci"
	I0416 17:08:46.702696   38726 command_runner.go:130] > runtime_root = "/run/runc"
	I0416 17:08:46.702704   38726 command_runner.go:130] > runtime_config_path = ""
	I0416 17:08:46.702715   38726 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0416 17:08:46.702726   38726 command_runner.go:130] > monitor_cgroup = "pod"
	I0416 17:08:46.702736   38726 command_runner.go:130] > monitor_exec_cgroup = ""
	I0416 17:08:46.702745   38726 command_runner.go:130] > monitor_env = [
	I0416 17:08:46.702756   38726 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0416 17:08:46.702764   38726 command_runner.go:130] > ]
	I0416 17:08:46.702772   38726 command_runner.go:130] > privileged_without_host_devices = false
	I0416 17:08:46.702782   38726 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0416 17:08:46.702792   38726 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0416 17:08:46.702805   38726 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0416 17:08:46.702820   38726 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0416 17:08:46.702835   38726 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0416 17:08:46.702847   38726 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0416 17:08:46.702865   38726 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0416 17:08:46.702876   38726 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0416 17:08:46.702885   38726 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0416 17:08:46.702891   38726 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0416 17:08:46.702899   38726 command_runner.go:130] > # Example:
	I0416 17:08:46.702907   38726 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0416 17:08:46.702920   38726 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0416 17:08:46.702930   38726 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0416 17:08:46.702938   38726 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0416 17:08:46.702947   38726 command_runner.go:130] > # cpuset = 0
	I0416 17:08:46.702955   38726 command_runner.go:130] > # cpushares = "0-1"
	I0416 17:08:46.702961   38726 command_runner.go:130] > # Where:
	I0416 17:08:46.702971   38726 command_runner.go:130] > # The workload name is workload-type.
	I0416 17:08:46.702981   38726 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0416 17:08:46.702988   38726 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0416 17:08:46.702995   38726 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0416 17:08:46.703003   38726 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0416 17:08:46.703011   38726 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0416 17:08:46.703018   38726 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0416 17:08:46.703025   38726 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0416 17:08:46.703032   38726 command_runner.go:130] > # Default value is set to true
	I0416 17:08:46.703040   38726 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0416 17:08:46.703052   38726 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0416 17:08:46.703063   38726 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0416 17:08:46.703074   38726 command_runner.go:130] > # Default value is set to 'false'
	I0416 17:08:46.703084   38726 command_runner.go:130] > # disable_hostport_mapping = false
	I0416 17:08:46.703097   38726 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0416 17:08:46.703105   38726 command_runner.go:130] > #
	I0416 17:08:46.703116   38726 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0416 17:08:46.703129   38726 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0416 17:08:46.703135   38726 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0416 17:08:46.703141   38726 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0416 17:08:46.703146   38726 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0416 17:08:46.703149   38726 command_runner.go:130] > [crio.image]
	I0416 17:08:46.703155   38726 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0416 17:08:46.703159   38726 command_runner.go:130] > # default_transport = "docker://"
	I0416 17:08:46.703167   38726 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0416 17:08:46.703172   38726 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0416 17:08:46.703175   38726 command_runner.go:130] > # global_auth_file = ""
	I0416 17:08:46.703180   38726 command_runner.go:130] > # The image used to instantiate infra containers.
	I0416 17:08:46.703184   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.703189   38726 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0416 17:08:46.703198   38726 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0416 17:08:46.703207   38726 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0416 17:08:46.703214   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.703221   38726 command_runner.go:130] > # pause_image_auth_file = ""
	I0416 17:08:46.703231   38726 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0416 17:08:46.703240   38726 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0416 17:08:46.703249   38726 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0416 17:08:46.703259   38726 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0416 17:08:46.703266   38726 command_runner.go:130] > # pause_command = "/pause"
	I0416 17:08:46.703274   38726 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0416 17:08:46.703283   38726 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0416 17:08:46.703292   38726 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0416 17:08:46.703301   38726 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0416 17:08:46.703309   38726 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0416 17:08:46.703318   38726 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0416 17:08:46.703330   38726 command_runner.go:130] > # pinned_images = [
	I0416 17:08:46.703336   38726 command_runner.go:130] > # ]
	I0416 17:08:46.703342   38726 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0416 17:08:46.703348   38726 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0416 17:08:46.703356   38726 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0416 17:08:46.703367   38726 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0416 17:08:46.703375   38726 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0416 17:08:46.703382   38726 command_runner.go:130] > # signature_policy = ""
	I0416 17:08:46.703387   38726 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0416 17:08:46.703395   38726 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0416 17:08:46.703403   38726 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0416 17:08:46.703411   38726 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0416 17:08:46.703420   38726 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0416 17:08:46.703425   38726 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0416 17:08:46.703436   38726 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0416 17:08:46.703444   38726 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0416 17:08:46.703451   38726 command_runner.go:130] > # changing them here.
	I0416 17:08:46.703454   38726 command_runner.go:130] > # insecure_registries = [
	I0416 17:08:46.703458   38726 command_runner.go:130] > # ]
	I0416 17:08:46.703466   38726 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0416 17:08:46.703474   38726 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0416 17:08:46.703478   38726 command_runner.go:130] > # image_volumes = "mkdir"
	I0416 17:08:46.703487   38726 command_runner.go:130] > # Temporary directory to use for storing big files
	I0416 17:08:46.703493   38726 command_runner.go:130] > # big_files_temporary_dir = ""
	I0416 17:08:46.703499   38726 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0416 17:08:46.703505   38726 command_runner.go:130] > # CNI plugins.
	I0416 17:08:46.703508   38726 command_runner.go:130] > [crio.network]
	I0416 17:08:46.703516   38726 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0416 17:08:46.703524   38726 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0416 17:08:46.703532   38726 command_runner.go:130] > # cni_default_network = ""
	I0416 17:08:46.703537   38726 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0416 17:08:46.703543   38726 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0416 17:08:46.703549   38726 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0416 17:08:46.703555   38726 command_runner.go:130] > # plugin_dirs = [
	I0416 17:08:46.703559   38726 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0416 17:08:46.703564   38726 command_runner.go:130] > # ]
	I0416 17:08:46.703571   38726 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0416 17:08:46.703577   38726 command_runner.go:130] > [crio.metrics]
	I0416 17:08:46.703583   38726 command_runner.go:130] > # Globally enable or disable metrics support.
	I0416 17:08:46.703590   38726 command_runner.go:130] > enable_metrics = true
	I0416 17:08:46.703594   38726 command_runner.go:130] > # Specify enabled metrics collectors.
	I0416 17:08:46.703601   38726 command_runner.go:130] > # Per default all metrics are enabled.
	I0416 17:08:46.703609   38726 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0416 17:08:46.703617   38726 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0416 17:08:46.703625   38726 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0416 17:08:46.703629   38726 command_runner.go:130] > # metrics_collectors = [
	I0416 17:08:46.703635   38726 command_runner.go:130] > # 	"operations",
	I0416 17:08:46.703640   38726 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0416 17:08:46.703646   38726 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0416 17:08:46.703650   38726 command_runner.go:130] > # 	"operations_errors",
	I0416 17:08:46.703655   38726 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0416 17:08:46.703659   38726 command_runner.go:130] > # 	"image_pulls_by_name",
	I0416 17:08:46.703665   38726 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0416 17:08:46.703670   38726 command_runner.go:130] > # 	"image_pulls_failures",
	I0416 17:08:46.703676   38726 command_runner.go:130] > # 	"image_pulls_successes",
	I0416 17:08:46.703680   38726 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0416 17:08:46.703684   38726 command_runner.go:130] > # 	"image_layer_reuse",
	I0416 17:08:46.703689   38726 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0416 17:08:46.703697   38726 command_runner.go:130] > # 	"containers_oom_total",
	I0416 17:08:46.703703   38726 command_runner.go:130] > # 	"containers_oom",
	I0416 17:08:46.703707   38726 command_runner.go:130] > # 	"processes_defunct",
	I0416 17:08:46.703713   38726 command_runner.go:130] > # 	"operations_total",
	I0416 17:08:46.703717   38726 command_runner.go:130] > # 	"operations_latency_seconds",
	I0416 17:08:46.703724   38726 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0416 17:08:46.703728   38726 command_runner.go:130] > # 	"operations_errors_total",
	I0416 17:08:46.703734   38726 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0416 17:08:46.703740   38726 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0416 17:08:46.703747   38726 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0416 17:08:46.703751   38726 command_runner.go:130] > # 	"image_pulls_success_total",
	I0416 17:08:46.703758   38726 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0416 17:08:46.703762   38726 command_runner.go:130] > # 	"containers_oom_count_total",
	I0416 17:08:46.703768   38726 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0416 17:08:46.703773   38726 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0416 17:08:46.703779   38726 command_runner.go:130] > # ]
	I0416 17:08:46.703784   38726 command_runner.go:130] > # The port on which the metrics server will listen.
	I0416 17:08:46.703791   38726 command_runner.go:130] > # metrics_port = 9090
	I0416 17:08:46.703796   38726 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0416 17:08:46.703802   38726 command_runner.go:130] > # metrics_socket = ""
	I0416 17:08:46.703807   38726 command_runner.go:130] > # The certificate for the secure metrics server.
	I0416 17:08:46.703815   38726 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0416 17:08:46.703821   38726 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0416 17:08:46.703839   38726 command_runner.go:130] > # certificate on any modification event.
	I0416 17:08:46.703844   38726 command_runner.go:130] > # metrics_cert = ""
	I0416 17:08:46.703848   38726 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0416 17:08:46.703854   38726 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0416 17:08:46.703858   38726 command_runner.go:130] > # metrics_key = ""
	I0416 17:08:46.703865   38726 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0416 17:08:46.703869   38726 command_runner.go:130] > [crio.tracing]
	I0416 17:08:46.703875   38726 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0416 17:08:46.703879   38726 command_runner.go:130] > # enable_tracing = false
	I0416 17:08:46.703887   38726 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0416 17:08:46.703891   38726 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0416 17:08:46.703897   38726 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0416 17:08:46.703904   38726 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0416 17:08:46.703908   38726 command_runner.go:130] > # CRI-O NRI configuration.
	I0416 17:08:46.703913   38726 command_runner.go:130] > [crio.nri]
	I0416 17:08:46.703917   38726 command_runner.go:130] > # Globally enable or disable NRI.
	I0416 17:08:46.703921   38726 command_runner.go:130] > # enable_nri = false
	I0416 17:08:46.703925   38726 command_runner.go:130] > # NRI socket to listen on.
	I0416 17:08:46.703929   38726 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0416 17:08:46.703935   38726 command_runner.go:130] > # NRI plugin directory to use.
	I0416 17:08:46.703940   38726 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0416 17:08:46.703947   38726 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0416 17:08:46.703954   38726 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0416 17:08:46.703959   38726 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0416 17:08:46.703965   38726 command_runner.go:130] > # nri_disable_connections = false
	I0416 17:08:46.703970   38726 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0416 17:08:46.703978   38726 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0416 17:08:46.703984   38726 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0416 17:08:46.703990   38726 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0416 17:08:46.703996   38726 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0416 17:08:46.704002   38726 command_runner.go:130] > [crio.stats]
	I0416 17:08:46.704007   38726 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0416 17:08:46.704014   38726 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0416 17:08:46.704022   38726 command_runner.go:130] > # stats_collection_period = 0
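
The CRI-O configuration rendered above is plain TOML, so the values this run depends on (for example that enable_metrics is on) can be read back directly. The sketch below is an assumed helper, not part of minikube or CRI-O: it uses the third-party github.com/BurntSushi/toml package and the conventional /etc/crio/crio.conf path, both of which are assumptions here.

// crio_toml_peek.go - minimal sketch (assumed helper) that reads the [crio.metrics]
// table from a CRI-O config file using github.com/BurntSushi/toml.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// Only the fields we care about; every other key in crio.conf is ignored by the decoder.
type crioConfig struct {
	Crio struct {
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
			MetricsPort   int  `toml:"metrics_port"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// /etc/crio/crio.conf is the conventional location; drop-ins under
	// /etc/crio/crio.conf.d/ would need to be merged separately.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatalf("decode crio.conf: %v", err)
	}
	fmt.Printf("metrics enabled=%v port=%d\n", cfg.Crio.Metrics.EnableMetrics, cfg.Crio.Metrics.MetricsPort)
}

Note that commented-out keys such as metrics_port simply decode to their Go zero values, so the documented default of 9090 would have to be applied by the caller.
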
	I0416 17:08:46.704046   38726 command_runner.go:130] ! time="2024-04-16 17:08:46.673009078Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0416 17:08:46.704064   38726 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0416 17:08:46.704203   38726 cni.go:84] Creating CNI manager for ""
	I0416 17:08:46.704217   38726 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 17:08:46.704225   38726 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:08:46.704248   38726 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-334221 NodeName:multinode-334221 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:08:46.704364   38726 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-334221"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
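The kubeadm.yaml generated above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a minimal sketch, assuming the gopkg.in/yaml.v3 package and the /var/tmp/minikube/kubeadm.yaml.new path the log scp's the file to, each document can be walked like this; the helper is illustrative, not minikube code.

// kubeadm_peek.go - minimal sketch (assumed helper) that walks the multi-document
// kubeadm config written above and prints each document's kind and apiVersion.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp step in the log above.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents in the stream
			}
			log.Fatalf("decode: %v", err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}
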
	I0416 17:08:46.704419   38726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:08:46.715323   38726 command_runner.go:130] > kubeadm
	I0416 17:08:46.715343   38726 command_runner.go:130] > kubectl
	I0416 17:08:46.715346   38726 command_runner.go:130] > kubelet
	I0416 17:08:46.715366   38726 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:08:46.715405   38726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:08:46.725541   38726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 17:08:46.744216   38726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:08:46.762699   38726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0416 17:08:46.781008   38726 ssh_runner.go:195] Run: grep 192.168.39.137	control-plane.minikube.internal$ /etc/hosts
	I0416 17:08:46.785249   38726 command_runner.go:130] > 192.168.39.137	control-plane.minikube.internal
	I0416 17:08:46.785432   38726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:08:46.923127   38726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:08:46.938790   38726 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221 for IP: 192.168.39.137
	I0416 17:08:46.938813   38726 certs.go:194] generating shared ca certs ...
	I0416 17:08:46.938829   38726 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:08:46.938960   38726 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:08:46.939041   38726 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:08:46.939053   38726 certs.go:256] generating profile certs ...
	I0416 17:08:46.939144   38726 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/client.key
	I0416 17:08:46.939212   38726 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.key.2ea9189c
	I0416 17:08:46.939251   38726 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.key
	I0416 17:08:46.939262   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 17:08:46.939282   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 17:08:46.939300   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 17:08:46.939316   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 17:08:46.939332   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 17:08:46.939350   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 17:08:46.939363   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 17:08:46.939381   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 17:08:46.939446   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:08:46.939487   38726 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:08:46.939501   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:08:46.939532   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:08:46.939560   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:08:46.939595   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:08:46.939646   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:08:46.939690   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:46.939708   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 17:08:46.939723   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 17:08:46.940568   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:08:46.969524   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:08:46.996252   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:08:47.024358   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:08:47.051335   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 17:08:47.079710   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:08:47.107996   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:08:47.138956   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 17:08:47.166837   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:08:47.193402   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:08:47.220791   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:08:47.247473   38726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:08:47.266129   38726 ssh_runner.go:195] Run: openssl version
	I0416 17:08:47.272672   38726 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 17:08:47.272754   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:08:47.284439   38726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:47.289392   38726 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:47.289593   38726 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:47.289642   38726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:47.295487   38726 command_runner.go:130] > b5213941
	I0416 17:08:47.295721   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:08:47.305631   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:08:47.317239   38726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:08:47.322122   38726 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:08:47.322228   38726 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:08:47.322265   38726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:08:47.328413   38726 command_runner.go:130] > 51391683
	I0416 17:08:47.328462   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:08:47.338303   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:08:47.349721   38726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:08:47.354883   38726 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:08:47.354908   38726 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:08:47.354944   38726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:08:47.361146   38726 command_runner.go:130] > 3ec20f2e
	I0416 17:08:47.361192   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
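
The three blocks above repeat the same pattern for each CA certificate: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, and add a <hash>.0 symlink whose name comes from `openssl x509 -hash -noout`. A minimal sketch of that pattern follows; it shells out to openssl (assumed to be on PATH) because the subject-hash algorithm is OpenSSL-specific, and the path in main is illustrative rather than minikube's actual helper.

// cahash_link.go - minimal sketch (assumed helper) of the hash-and-symlink step
// shown in the log: compute a certificate's OpenSSL subject hash and symlink it
// as /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// os.Symlink fails if the target name exists; the log's `ln -fs` overwrites, so remove first.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
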
	I0416 17:08:47.371024   38726 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:08:47.375921   38726 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:08:47.375948   38726 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0416 17:08:47.375956   38726 command_runner.go:130] > Device: 253,1	Inode: 9433606     Links: 1
	I0416 17:08:47.375966   38726 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 17:08:47.375982   38726 command_runner.go:130] > Access: 2024-04-16 17:02:34.010891714 +0000
	I0416 17:08:47.375991   38726 command_runner.go:130] > Modify: 2024-04-16 17:02:34.010891714 +0000
	I0416 17:08:47.375998   38726 command_runner.go:130] > Change: 2024-04-16 17:02:34.010891714 +0000
	I0416 17:08:47.376006   38726 command_runner.go:130] >  Birth: 2024-04-16 17:02:34.010891714 +0000
	I0416 17:08:47.376053   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:08:47.381971   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.382173   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:08:47.387951   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.388220   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:08:47.393801   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.394101   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:08:47.400147   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.400201   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:08:47.405949   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.406222   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 17:08:47.412737   38726 command_runner.go:130] > Certificate will not expire
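
Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done without shelling out, using only the Go standard library; the sketch below is an assumed helper, and the path in main is just one of the certificates checked in the log.

// certcheck.go - minimal sketch (assumed helper) of the 24-hour expiry check the
// log performs with `openssl x509 -checkend 86400`, done natively with crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
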
	I0416 17:08:47.412802   38726 kubeadm.go:391] StartCluster: {Name:multinode-334221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.95 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:08:47.412963   38726 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:08:47.413031   38726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:08:47.457672   38726 command_runner.go:130] > 90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1
	I0416 17:08:47.457708   38726 command_runner.go:130] > ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059
	I0416 17:08:47.457714   38726 command_runner.go:130] > 8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85
	I0416 17:08:47.457720   38726 command_runner.go:130] > 1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c
	I0416 17:08:47.457726   38726 command_runner.go:130] > 2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a
	I0416 17:08:47.457732   38726 command_runner.go:130] > 842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2
	I0416 17:08:47.457737   38726 command_runner.go:130] > 37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93
	I0416 17:08:47.457762   38726 command_runner.go:130] > dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c
	I0416 17:08:47.457785   38726 cri.go:89] found id: "90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1"
	I0416 17:08:47.457794   38726 cri.go:89] found id: "ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059"
	I0416 17:08:47.457797   38726 cri.go:89] found id: "8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85"
	I0416 17:08:47.457800   38726 cri.go:89] found id: "1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c"
	I0416 17:08:47.457802   38726 cri.go:89] found id: "2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a"
	I0416 17:08:47.457805   38726 cri.go:89] found id: "842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2"
	I0416 17:08:47.457808   38726 cri.go:89] found id: "37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93"
	I0416 17:08:47.457813   38726 cri.go:89] found id: "dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c"
	I0416 17:08:47.457816   38726 cri.go:89] found id: ""
	I0416 17:08:47.457852   38726 ssh_runner.go:195] Run: sudo runc list -f json
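
StartCluster begins by collecting the IDs of any existing kube-system containers via the crictl invocation shown above; the eight IDs it prints are the ones cri.go records before the cluster is restarted. A minimal stand-alone sketch of that collection step (assuming crictl is installed and run with sufficient privileges; this is not the cri.go implementation):

// crictl_list.go - minimal sketch (assumed helper) of the ID-collection step above:
// list all kube-system containers with crictl and gather their IDs.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	// Same invocation as in the log, minus the sudo wrapper.
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
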
	
	
	==> CRI-O <==
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.478386341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713287415478359009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64c6459e-d8db-4ac2-86f4-afd936967434 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.479350273Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57f1e292-0f08-4782-99be-816c20d0180b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.479406463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57f1e292-0f08-4782-99be-816c20d0180b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.479822187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39bb74fcbecdeb2cd9c43ef1f41754ab21c3506e179e8cdd0f266653d9eeccc7,PodSandboxId:25edee42625e333cda08a390c25df64a49bcb34dae2df5570a3472bc0d201242,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713287368678061445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c,PodSandboxId:e728a4a327b666dc29fc5594bbd940db37ca0ee807385463632f441f1644812c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713287335195483143,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1,PodSandboxId:02cb3b0ad34d5948c309b23d1568320f0b0a840ac0bec7e1659783c09fe1a11a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713287335154843422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008,PodSandboxId:8da2712a9cc645c05d13c0b01b6105b5019efd9ccd6e2397b4df4f8c4c724eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713287335009106481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6-780165e0a570,},Annotations:map[string]
string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b39d13c5d882d074f0e21027230fa622acae265826ae92f3cfedbdeddba0a9,PodSandboxId:25f600fe2fc457c62ebac85541058159428263f98e8664e1337e781b7938b8e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713287334926496273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5,PodSandboxId:ca6aae62358e1fbf35d548b4673c38e2c64a5beea08f2055788bb10730f29d45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713287330116705571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc,PodSandboxId:be38d4a6f7e96e5bca49fe4d4c6624519c46762b303626f52631021f70715131,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713287330176032126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8,PodSandboxId:705fc0c156c3d25216b3b420ed3731560deeeb7e4f1c9c4e11e2000818d86d9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713287330025886701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string{io.kubernetes.container.hash: 5dde1468,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b,PodSandboxId:3dc43f19b5247d0f42c95d4caece46d83f37680361cb0f366594ae7a9799929f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713287330040042723,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.container.hash: 683921d8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff6e20af81510222826d1f3ec91344fa5bf553f74f3af5217b80c032e66de9a,PodSandboxId:69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713287025336379052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1,PodSandboxId:792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713286979309245057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.kubernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059,PodSandboxId:e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713286979289160574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85,PodSandboxId:0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713286977332855192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c,PodSandboxId:1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713286977189160958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6
-780165e0a570,},Annotations:map[string]string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a,PodSandboxId:a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713286957910413253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string
{io.kubernetes.container.hash: 5dde1468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2,PodSandboxId:03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713286957883794942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.
container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c,PodSandboxId:258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713286957764848080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93,PodSandboxId:1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713286957790244089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 683921d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57f1e292-0f08-4782-99be-816c20d0180b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.526544426Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0aafdf1d-daa0-49c7-ac72-d7d592de732e name=/runtime.v1.RuntimeService/Version
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.526615634Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0aafdf1d-daa0-49c7-ac72-d7d592de732e name=/runtime.v1.RuntimeService/Version
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.528041703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1dfc5ac-426d-4bba-848b-43097f16e5c9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.528396970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713287415528377209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1dfc5ac-426d-4bba-848b-43097f16e5c9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.528931963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=053faaec-8552-4e2e-bfc4-586e90469e65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.529071971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=053faaec-8552-4e2e-bfc4-586e90469e65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.529395698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39bb74fcbecdeb2cd9c43ef1f41754ab21c3506e179e8cdd0f266653d9eeccc7,PodSandboxId:25edee42625e333cda08a390c25df64a49bcb34dae2df5570a3472bc0d201242,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713287368678061445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c,PodSandboxId:e728a4a327b666dc29fc5594bbd940db37ca0ee807385463632f441f1644812c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713287335195483143,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1,PodSandboxId:02cb3b0ad34d5948c309b23d1568320f0b0a840ac0bec7e1659783c09fe1a11a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713287335154843422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008,PodSandboxId:8da2712a9cc645c05d13c0b01b6105b5019efd9ccd6e2397b4df4f8c4c724eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713287335009106481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6-780165e0a570,},Annotations:map[string]
string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b39d13c5d882d074f0e21027230fa622acae265826ae92f3cfedbdeddba0a9,PodSandboxId:25f600fe2fc457c62ebac85541058159428263f98e8664e1337e781b7938b8e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713287334926496273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5,PodSandboxId:ca6aae62358e1fbf35d548b4673c38e2c64a5beea08f2055788bb10730f29d45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713287330116705571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc,PodSandboxId:be38d4a6f7e96e5bca49fe4d4c6624519c46762b303626f52631021f70715131,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713287330176032126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8,PodSandboxId:705fc0c156c3d25216b3b420ed3731560deeeb7e4f1c9c4e11e2000818d86d9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713287330025886701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string{io.kubernetes.container.hash: 5dde1468,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b,PodSandboxId:3dc43f19b5247d0f42c95d4caece46d83f37680361cb0f366594ae7a9799929f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713287330040042723,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.container.hash: 683921d8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff6e20af81510222826d1f3ec91344fa5bf553f74f3af5217b80c032e66de9a,PodSandboxId:69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713287025336379052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1,PodSandboxId:792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713286979309245057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.kubernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059,PodSandboxId:e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713286979289160574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85,PodSandboxId:0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713286977332855192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c,PodSandboxId:1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713286977189160958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6
-780165e0a570,},Annotations:map[string]string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a,PodSandboxId:a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713286957910413253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string
{io.kubernetes.container.hash: 5dde1468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2,PodSandboxId:03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713286957883794942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.
container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c,PodSandboxId:258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713286957764848080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93,PodSandboxId:1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713286957790244089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 683921d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=053faaec-8552-4e2e-bfc4-586e90469e65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.575141155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe5bc1ee-1882-4cc9-b821-ba2c5690231d name=/runtime.v1.RuntimeService/Version
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.575211922Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe5bc1ee-1882-4cc9-b821-ba2c5690231d name=/runtime.v1.RuntimeService/Version
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.576861838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=614951b2-50b9-4946-97a3-85c2b6004792 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.577305958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713287415577282921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=614951b2-50b9-4946-97a3-85c2b6004792 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.578172531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50199587-4995-4610-ba61-978edb24f032 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.578227418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50199587-4995-4610-ba61-978edb24f032 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.578687981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39bb74fcbecdeb2cd9c43ef1f41754ab21c3506e179e8cdd0f266653d9eeccc7,PodSandboxId:25edee42625e333cda08a390c25df64a49bcb34dae2df5570a3472bc0d201242,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713287368678061445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c,PodSandboxId:e728a4a327b666dc29fc5594bbd940db37ca0ee807385463632f441f1644812c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713287335195483143,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1,PodSandboxId:02cb3b0ad34d5948c309b23d1568320f0b0a840ac0bec7e1659783c09fe1a11a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713287335154843422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008,PodSandboxId:8da2712a9cc645c05d13c0b01b6105b5019efd9ccd6e2397b4df4f8c4c724eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713287335009106481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6-780165e0a570,},Annotations:map[string]
string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b39d13c5d882d074f0e21027230fa622acae265826ae92f3cfedbdeddba0a9,PodSandboxId:25f600fe2fc457c62ebac85541058159428263f98e8664e1337e781b7938b8e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713287334926496273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5,PodSandboxId:ca6aae62358e1fbf35d548b4673c38e2c64a5beea08f2055788bb10730f29d45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713287330116705571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc,PodSandboxId:be38d4a6f7e96e5bca49fe4d4c6624519c46762b303626f52631021f70715131,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713287330176032126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8,PodSandboxId:705fc0c156c3d25216b3b420ed3731560deeeb7e4f1c9c4e11e2000818d86d9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713287330025886701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string{io.kubernetes.container.hash: 5dde1468,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b,PodSandboxId:3dc43f19b5247d0f42c95d4caece46d83f37680361cb0f366594ae7a9799929f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713287330040042723,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.container.hash: 683921d8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff6e20af81510222826d1f3ec91344fa5bf553f74f3af5217b80c032e66de9a,PodSandboxId:69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713287025336379052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1,PodSandboxId:792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713286979309245057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.kubernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059,PodSandboxId:e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713286979289160574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85,PodSandboxId:0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713286977332855192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c,PodSandboxId:1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713286977189160958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6
-780165e0a570,},Annotations:map[string]string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a,PodSandboxId:a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713286957910413253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string
{io.kubernetes.container.hash: 5dde1468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2,PodSandboxId:03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713286957883794942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.
container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c,PodSandboxId:258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713286957764848080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93,PodSandboxId:1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713286957790244089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 683921d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50199587-4995-4610-ba61-978edb24f032 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.626681666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=561ce42f-6c3f-49e2-9a6e-84858fc3cec6 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.626758869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=561ce42f-6c3f-49e2-9a6e-84858fc3cec6 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.628353991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a430713-a0e2-4474-9f6c-03242b507f9a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.628743540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713287415628721458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a430713-a0e2-4474-9f6c-03242b507f9a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.629598206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8b03884-d3c8-45cf-afe2-a05c039ac3e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.629653836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8b03884-d3c8-45cf-afe2-a05c039ac3e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:10:15 multinode-334221 crio[2849]: time="2024-04-16 17:10:15.630094611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39bb74fcbecdeb2cd9c43ef1f41754ab21c3506e179e8cdd0f266653d9eeccc7,PodSandboxId:25edee42625e333cda08a390c25df64a49bcb34dae2df5570a3472bc0d201242,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713287368678061445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c,PodSandboxId:e728a4a327b666dc29fc5594bbd940db37ca0ee807385463632f441f1644812c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713287335195483143,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1,PodSandboxId:02cb3b0ad34d5948c309b23d1568320f0b0a840ac0bec7e1659783c09fe1a11a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713287335154843422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008,PodSandboxId:8da2712a9cc645c05d13c0b01b6105b5019efd9ccd6e2397b4df4f8c4c724eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713287335009106481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6-780165e0a570,},Annotations:map[string]
string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b39d13c5d882d074f0e21027230fa622acae265826ae92f3cfedbdeddba0a9,PodSandboxId:25f600fe2fc457c62ebac85541058159428263f98e8664e1337e781b7938b8e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713287334926496273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5,PodSandboxId:ca6aae62358e1fbf35d548b4673c38e2c64a5beea08f2055788bb10730f29d45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713287330116705571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc,PodSandboxId:be38d4a6f7e96e5bca49fe4d4c6624519c46762b303626f52631021f70715131,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713287330176032126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8,PodSandboxId:705fc0c156c3d25216b3b420ed3731560deeeb7e4f1c9c4e11e2000818d86d9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713287330025886701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string{io.kubernetes.container.hash: 5dde1468,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b,PodSandboxId:3dc43f19b5247d0f42c95d4caece46d83f37680361cb0f366594ae7a9799929f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713287330040042723,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.container.hash: 683921d8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff6e20af81510222826d1f3ec91344fa5bf553f74f3af5217b80c032e66de9a,PodSandboxId:69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713287025336379052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1,PodSandboxId:792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713286979309245057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.kubernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059,PodSandboxId:e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713286979289160574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85,PodSandboxId:0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713286977332855192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c,PodSandboxId:1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713286977189160958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6
-780165e0a570,},Annotations:map[string]string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a,PodSandboxId:a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713286957910413253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string
{io.kubernetes.container.hash: 5dde1468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2,PodSandboxId:03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713286957883794942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.
container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c,PodSandboxId:258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713286957764848080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93,PodSandboxId:1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713286957790244089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 683921d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8b03884-d3c8-45cf-afe2-a05c039ac3e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	39bb74fcbecde       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      47 seconds ago       Running             busybox                   1                   25edee42625e3       busybox-7fdf7869d9-fn86w
	918a75a3ca792       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   e728a4a327b66       kindnet-fntnd
	0d96295ea9684       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   02cb3b0ad34d5       coredns-76f75df574-kmmn4
	ff8b0d5f33be5       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Running             kube-proxy                1                   8da2712a9cc64       kube-proxy-jjc8v
	33b39d13c5d88       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   25f600fe2fc45       storage-provisioner
	22266da17977f       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   1                   be38d4a6f7e96       kube-controller-manager-multinode-334221
	677a87ab6b202       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Running             kube-scheduler            1                   ca6aae62358e1       kube-scheduler-multinode-334221
	47be8c7e2330a       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            1                   3dc43f19b5247       kube-apiserver-multinode-334221
	257ea8618977c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   705fc0c156c3d       etcd-multinode-334221
	6ff6e20af8151       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   69f4c1c5a6a7b       busybox-7fdf7869d9-fn86w
	90dcd274439a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   792dfcb8e32e6       storage-provisioner
	ec151ba6a42a2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   e795d10063d9e       coredns-76f75df574-kmmn4
	8d106f52934dc       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   0b7212d0b852b       kindnet-fntnd
	1ad0500c2ca8e       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago        Exited              kube-proxy                0                   1daef1766a0ea       kube-proxy-jjc8v
	2a739b90a41d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   a403e706ad902       etcd-multinode-334221
	842b6569b6e08       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago        Exited              kube-scheduler            0                   03f5937495793       kube-scheduler-multinode-334221
	37623592e737d       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago        Exited              kube-apiserver            0                   1f07ad5930705       kube-apiserver-multinode-334221
	dffaed579f047       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago        Exited              kube-controller-manager   0                   258d7e84b6f54       kube-controller-manager-multinode-334221
	
	
	==> coredns [0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48533 - 2317 "HINFO IN 8745856005267822946.1325250241756429142. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.084486017s
	
	
	==> coredns [ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059] <==
	[INFO] 10.244.1.2:54242 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001686432s
	[INFO] 10.244.1.2:52356 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154252s
	[INFO] 10.244.1.2:53372 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135816s
	[INFO] 10.244.1.2:57180 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001264825s
	[INFO] 10.244.1.2:34404 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007883s
	[INFO] 10.244.1.2:35137 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158335s
	[INFO] 10.244.1.2:47370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098421s
	[INFO] 10.244.0.3:37968 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156843s
	[INFO] 10.244.0.3:39972 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092463s
	[INFO] 10.244.0.3:40885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076822s
	[INFO] 10.244.0.3:35714 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071218s
	[INFO] 10.244.1.2:55015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187475s
	[INFO] 10.244.1.2:44135 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098668s
	[INFO] 10.244.1.2:44160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118175s
	[INFO] 10.244.1.2:53055 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152982s
	[INFO] 10.244.0.3:50792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109758s
	[INFO] 10.244.0.3:56375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215576s
	[INFO] 10.244.0.3:53832 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079611s
	[INFO] 10.244.0.3:58674 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099832s
	[INFO] 10.244.1.2:42759 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172484s
	[INFO] 10.244.1.2:32992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000273696s
	[INFO] 10.244.1.2:41132 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113035s
	[INFO] 10.244.1.2:55606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115315s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-334221
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334221
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-334221
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_02_44_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:02:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334221
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:10:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:08:53 +0000   Tue, 16 Apr 2024 17:02:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:08:53 +0000   Tue, 16 Apr 2024 17:02:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:08:53 +0000   Tue, 16 Apr 2024 17:02:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:08:53 +0000   Tue, 16 Apr 2024 17:02:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    multinode-334221
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5c158eac4584c888e6ef2b0e52007a0
	  System UUID:                c5c158ea-c458-4c88-8e6e-f2b0e52007a0
	  Boot ID:                    55202679-9eef-45ab-97dd-0197453c8d95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-fn86w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 coredns-76f75df574-kmmn4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m19s
	  kube-system                 etcd-multinode-334221                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m31s
	  kube-system                 kindnet-fntnd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m19s
	  kube-system                 kube-apiserver-multinode-334221             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-controller-manager-multinode-334221    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-proxy-jjc8v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-scheduler-multinode-334221             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m18s                  kube-proxy       
	  Normal  Starting                 80s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  7m38s (x8 over 7m38s)  kubelet          Node multinode-334221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s (x8 over 7m38s)  kubelet          Node multinode-334221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s (x7 over 7m38s)  kubelet          Node multinode-334221 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m31s                  kubelet          Node multinode-334221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m31s                  kubelet          Node multinode-334221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m31s                  kubelet          Node multinode-334221 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m31s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m20s                  node-controller  Node multinode-334221 event: Registered Node multinode-334221 in Controller
	  Normal  NodeReady                7m17s                  kubelet          Node multinode-334221 status is now: NodeReady
	  Normal  Starting                 86s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s (x8 over 86s)      kubelet          Node multinode-334221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 86s)      kubelet          Node multinode-334221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 86s)      kubelet          Node multinode-334221 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                    node-controller  Node multinode-334221 event: Registered Node multinode-334221 in Controller
	
	
	Name:               multinode-334221-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334221-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-334221
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_09_37_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:09:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334221-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:10:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:10:07 +0000   Tue, 16 Apr 2024 17:09:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:10:07 +0000   Tue, 16 Apr 2024 17:09:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:10:07 +0000   Tue, 16 Apr 2024 17:09:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:10:07 +0000   Tue, 16 Apr 2024 17:09:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-334221-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0675febc0c45430ea1c6abae45425fcc
	  System UUID:                0675febc-0c45-430e-a1c6-abae45425fcc
	  Boot ID:                    a243c7c7-9811-4f3c-bee7-4fcaacac818f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-d5wzc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kindnet-xfr28               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-proxy-24lft            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 34s                    kube-proxy       
	  Normal  Starting                 6m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m42s (x2 over 6m42s)  kubelet          Node multinode-334221-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x2 over 6m42s)  kubelet          Node multinode-334221-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x2 over 6m42s)  kubelet          Node multinode-334221-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m34s                  kubelet          Node multinode-334221-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  39s (x2 over 39s)      kubelet          Node multinode-334221-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x2 over 39s)      kubelet          Node multinode-334221-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x2 over 39s)      kubelet          Node multinode-334221-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                    node-controller  Node multinode-334221-m02 event: Registered Node multinode-334221-m02 in Controller
	  Normal  NodeReady                31s                    kubelet          Node multinode-334221-m02 status is now: NodeReady
	
	
	Name:               multinode-334221-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334221-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-334221
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_10_06_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:10:04 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334221-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:10:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:10:12 +0000   Tue, 16 Apr 2024 17:10:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:10:12 +0000   Tue, 16 Apr 2024 17:10:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:10:12 +0000   Tue, 16 Apr 2024 17:10:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:10:12 +0000   Tue, 16 Apr 2024 17:10:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    multinode-334221-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ec508fc59404f7f8721e7c259433705
	  System UUID:                0ec508fc-5940-4f7f-8721-e7c259433705
	  Boot ID:                    fedfd60e-ab0e-4b45-939a-7e6b81735b66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2q8wk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m57s
	  kube-system                 kube-proxy-xtm5h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m51s                  kube-proxy  
	  Normal  Starting                 6s                     kube-proxy  
	  Normal  Starting                 5m11s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    5m57s (x2 over 5m57s)  kubelet     Node multinode-334221-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x2 over 5m57s)  kubelet     Node multinode-334221-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m57s (x2 over 5m57s)  kubelet     Node multinode-334221-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m48s                  kubelet     Node multinode-334221-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m15s (x2 over 5m16s)  kubelet     Node multinode-334221-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m15s (x2 over 5m16s)  kubelet     Node multinode-334221-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m15s (x2 over 5m16s)  kubelet     Node multinode-334221-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m8s                   kubelet     Node multinode-334221-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  11s (x2 over 11s)      kubelet     Node multinode-334221-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x2 over 11s)      kubelet     Node multinode-334221-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x2 over 11s)      kubelet     Node multinode-334221-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-334221-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.060370] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071560] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.176929] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140075] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.288128] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.976125] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.067093] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.357730] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.720073] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.587857] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.092944] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.702743] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
	[  +0.153033] kauditd_printk_skb: 21 callbacks suppressed
	[Apr16 17:03] kauditd_printk_skb: 82 callbacks suppressed
	[Apr16 17:08] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +0.169198] systemd-fstab-generator[2780]: Ignoring "noauto" option for root device
	[  +0.176428] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.158939] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.294153] systemd-fstab-generator[2835]: Ignoring "noauto" option for root device
	[  +0.741104] systemd-fstab-generator[2937]: Ignoring "noauto" option for root device
	[  +2.238997] systemd-fstab-generator[3064]: Ignoring "noauto" option for root device
	[  +5.731424] kauditd_printk_skb: 184 callbacks suppressed
	[Apr16 17:09] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.444050] systemd-fstab-generator[3882]: Ignoring "noauto" option for root device
	[ +17.875973] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8] <==
	{"level":"info","ts":"2024-04-16T17:08:50.572383Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5527995f6263874a","initial-advertise-peer-urls":["https://192.168.39.137:2380"],"listen-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:08:50.57244Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T17:08:50.572549Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-04-16T17:08:50.572581Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-04-16T17:08:50.573669Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:08:50.573744Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:08:50.573755Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:08:50.57393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a switched to configuration voters=(6136041652267222858)"}
	{"level":"info","ts":"2024-04-16T17:08:50.57414Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","added-peer-id":"5527995f6263874a","added-peer-peer-urls":["https://192.168.39.137:2380"]}
	{"level":"info","ts":"2024-04-16T17:08:50.574294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:08:50.574351Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:08:52.030733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T17:08:52.030806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:08:52.030848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 2"}
	{"level":"info","ts":"2024-04-16T17:08:52.030862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T17:08:52.030888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgVoteResp from 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-04-16T17:08:52.030897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became leader at term 3"}
	{"level":"info","ts":"2024-04-16T17:08:52.030908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5527995f6263874a elected leader 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-04-16T17:08:52.037189Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:08:52.038103Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5527995f6263874a","local-member-attributes":"{Name:multinode-334221 ClientURLs:[https://192.168.39.137:2379]}","request-path":"/0/members/5527995f6263874a/attributes","cluster-id":"8623b2a8b011233f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:08:52.038356Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:08:52.038653Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:08:52.038694Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:08:52.039293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.137:2379"}
	{"level":"info","ts":"2024-04-16T17:08:52.040475Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a] <==
	{"level":"info","ts":"2024-04-16T17:02:38.532033Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:02:38.532164Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:03:33.371202Z","caller":"traceutil/trace.go:171","msg":"trace[1176074096] linearizableReadLoop","detail":"{readStateIndex:492; appliedIndex:491; }","duration":"170.613839ms","start":"2024-04-16T17:03:33.200551Z","end":"2024-04-16T17:03:33.371165Z","steps":["trace[1176074096] 'read index received'  (duration: 166.533636ms)","trace[1176074096] 'applied index is now lower than readState.Index'  (duration: 4.079578ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:03:33.371916Z","caller":"traceutil/trace.go:171","msg":"trace[61202478] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"181.462307ms","start":"2024-04-16T17:03:33.190443Z","end":"2024-04-16T17:03:33.371905Z","steps":["trace[61202478] 'process raft request'  (duration: 176.737661ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:03:33.37212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.515336ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-334221-m02.17c6d176062cc839\" ","response":"range_response_count:1 size:741"}
	{"level":"info","ts":"2024-04-16T17:03:33.372883Z","caller":"traceutil/trace.go:171","msg":"trace[1179955265] range","detail":"{range_begin:/registry/events/default/multinode-334221-m02.17c6d176062cc839; range_end:; response_count:1; response_revision:472; }","duration":"172.306584ms","start":"2024-04-16T17:03:33.200527Z","end":"2024-04-16T17:03:33.372834Z","steps":["trace[1179955265] 'agreement among raft nodes before linearized reading'  (duration: 171.466761ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:03:35.885061Z","caller":"traceutil/trace.go:171","msg":"trace[1235357056] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:520; }","duration":"156.41988ms","start":"2024-04-16T17:03:35.728623Z","end":"2024-04-16T17:03:35.885043Z","steps":["trace[1235357056] 'read index received'  (duration: 142.917864ms)","trace[1235357056] 'applied index is now lower than readState.Index'  (duration: 13.501125ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:03:35.885217Z","caller":"traceutil/trace.go:171","msg":"trace[140332580] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"171.690877ms","start":"2024-04-16T17:03:35.713518Z","end":"2024-04-16T17:03:35.885209Z","steps":["trace[140332580] 'process raft request'  (duration: 158.070586ms)","trace[140332580] 'compare'  (duration: 13.2934ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:03:35.885352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.716267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-04-16T17:03:35.88541Z","caller":"traceutil/trace.go:171","msg":"trace[2098667624] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:500; }","duration":"156.802396ms","start":"2024-04-16T17:03:35.7286Z","end":"2024-04-16T17:03:35.885403Z","steps":["trace[2098667624] 'agreement among raft nodes before linearized reading'  (duration: 156.713584ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:03:35.885437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.319487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-334221-m02\" ","response":"range_response_count:1 size:2959"}
	{"level":"info","ts":"2024-04-16T17:03:35.885504Z","caller":"traceutil/trace.go:171","msg":"trace[1138375139] range","detail":"{range_begin:/registry/minions/multinode-334221-m02; range_end:; response_count:1; response_revision:500; }","duration":"139.408533ms","start":"2024-04-16T17:03:35.746086Z","end":"2024-04-16T17:03:35.885494Z","steps":["trace[1138375139] 'agreement among raft nodes before linearized reading'  (duration: 139.32103ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:04:18.898486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.070567ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9748761469881449988 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-334221-m03.17c6d1809f6d8af6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-334221-m03.17c6d1809f6d8af6\" value_size:642 lease:525389433026673887 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-16T17:04:18.898753Z","caller":"traceutil/trace.go:171","msg":"trace[1195658590] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"238.56281ms","start":"2024-04-16T17:04:18.660163Z","end":"2024-04-16T17:04:18.898725Z","steps":["trace[1195658590] 'process raft request'  (duration: 121.329744ms)","trace[1195658590] 'compare'  (duration: 115.739674ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:04:18.899667Z","caller":"traceutil/trace.go:171","msg":"trace[1245388816] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"173.130052ms","start":"2024-04-16T17:04:18.726524Z","end":"2024-04-16T17:04:18.899654Z","steps":["trace[1245388816] 'process raft request'  (duration: 172.700374ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:07:13.808177Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-16T17:07:13.808318Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-334221","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	{"level":"warn","ts":"2024-04-16T17:07:13.808457Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T17:07:13.808613Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T17:07:13.902665Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T17:07:13.90273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T17:07:13.902806Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5527995f6263874a","current-leader-member-id":"5527995f6263874a"}
	{"level":"info","ts":"2024-04-16T17:07:13.905422Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-04-16T17:07:13.905566Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-04-16T17:07:13.905578Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-334221","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	
	
	==> kernel <==
	 17:10:16 up 8 min,  0 users,  load average: 0.54, 0.54, 0.27
	Linux multinode-334221 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85] <==
	I0416 17:06:28.520150       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:06:38.529845       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:06:38.529896       1 main.go:227] handling current node
	I0416 17:06:38.529907       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:06:38.529913       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:06:38.530078       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:06:38.530110       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:06:48.537345       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:06:48.537449       1 main.go:227] handling current node
	I0416 17:06:48.537595       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:06:48.537627       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:06:48.537794       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:06:48.537831       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:06:58.552061       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:06:58.552207       1 main.go:227] handling current node
	I0416 17:06:58.552242       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:06:58.552263       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:06:58.552415       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:06:58.552436       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:07:08.563223       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:07:08.563382       1 main.go:227] handling current node
	I0416 17:07:08.563406       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:07:08.563425       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:07:08.563549       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:07:08.563573       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c] <==
	I0416 17:09:26.217121       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:09:26.217287       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:09:26.217328       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:09:36.223927       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:09:36.224401       1 main.go:227] handling current node
	I0416 17:09:36.224437       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:09:36.224471       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:09:46.237681       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:09:46.237732       1 main.go:227] handling current node
	I0416 17:09:46.237743       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:09:46.237755       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:09:46.237874       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:09:46.237917       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:09:56.252840       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:09:56.253091       1 main.go:227] handling current node
	I0416 17:09:56.253143       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:09:56.253176       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:09:56.253377       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:09:56.253431       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:10:06.266568       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:10:06.266840       1 main.go:227] handling current node
	I0416 17:10:06.266923       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:10:06.267128       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:10:06.267645       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:10:06.267883       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93] <==
	I0416 17:02:41.496352       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 17:02:41.496473       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:02:42.175811       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:02:42.250450       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:02:42.297812       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 17:02:42.304885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137]
	I0416 17:02:42.306185       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 17:02:42.311415       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 17:02:42.569396       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:02:43.916560       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:02:43.948692       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 17:02:43.968018       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:02:56.263061       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0416 17:02:56.619165       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	W0416 17:07:13.810762       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.831716       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.831830       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.831874       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.831917       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.842260       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.843541       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.844370       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0416 17:07:13.848664       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0416 17:07:13.849103       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.852919       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b] <==
	I0416 17:08:53.401683       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 17:08:53.416563       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 17:08:53.416615       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 17:08:53.518221       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 17:08:53.530066       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 17:08:53.531226       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 17:08:53.531309       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:08:53.531333       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:08:53.531505       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:08:53.531536       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:08:53.531671       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:08:53.531713       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:08:53.563533       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:08:53.589635       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 17:08:53.598254       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 17:08:53.608216       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0416 17:08:53.639726       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0416 17:08:54.404770       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:08:55.930375       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:08:56.060547       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:08:56.073794       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:08:56.145628       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:08:56.152339       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:09:06.275453       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 17:09:06.312234       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc] <==
	I0416 17:09:32.268455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="10.423621ms"
	I0416 17:09:32.268816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="120.158µs"
	I0416 17:09:36.368132       1 event.go:376] "Event occurred" object="multinode-334221-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-334221-m02 event: Removing Node multinode-334221-m02 from Controller"
	I0416 17:09:36.622917       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334221-m02\" does not exist"
	I0416 17:09:36.625122       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-tzz4s" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-tzz4s"
	I0416 17:09:36.635455       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-334221-m02" podCIDRs=["10.244.1.0/24"]
	I0416 17:09:37.231189       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="52.85µs"
	I0416 17:09:38.534414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="73.199µs"
	I0416 17:09:38.546088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="59.086µs"
	I0416 17:09:38.557378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.645µs"
	I0416 17:09:38.594848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="90.356µs"
	I0416 17:09:38.600871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="41.781µs"
	I0416 17:09:38.608680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.6µs"
	I0416 17:09:41.368880       1 event.go:376] "Event occurred" object="multinode-334221-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-334221-m02 event: Registered Node multinode-334221-m02 in Controller"
	I0416 17:09:44.237877       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:09:44.270664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.633µs"
	I0416 17:09:44.285128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.281µs"
	I0416 17:09:46.024386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="9.737465ms"
	I0416 17:09:46.025228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.962µs"
	I0416 17:09:46.381063       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-d5wzc" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-d5wzc"
	I0416 17:10:03.736456       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:10:04.981580       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334221-m03\" does not exist"
	I0416 17:10:04.982035       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:10:04.998756       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-334221-m03" podCIDRs=["10.244.2.0/24"]
	I0416 17:10:12.483125       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m03"
	
	
	==> kube-controller-manager [dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c] <==
	I0416 17:03:46.461844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="15.202478ms"
	I0416 17:03:46.462114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.059µs"
	I0416 17:04:18.905550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:04:18.907819       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334221-m03\" does not exist"
	I0416 17:04:18.939319       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-334221-m03" podCIDRs=["10.244.2.0/24"]
	I0416 17:04:18.943604       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xtm5h"
	I0416 17:04:18.946321       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2q8wk"
	I0416 17:04:20.652160       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-334221-m03"
	I0416 17:04:20.652488       1 event.go:376] "Event occurred" object="multinode-334221-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-334221-m03 event: Registered Node multinode-334221-m03 in Controller"
	I0416 17:04:27.499657       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:04:58.984682       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:05:00.043769       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334221-m03\" does not exist"
	I0416 17:05:00.044541       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:05:00.055267       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-334221-m03" podCIDRs=["10.244.3.0/24"]
	I0416 17:05:07.243402       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:05:45.739189       1 event.go:376] "Event occurred" object="multinode-334221-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-334221-m02 status is now: NodeNotReady"
	I0416 17:05:45.740871       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m03"
	I0416 17:05:45.759169       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-24lft" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:05:45.771461       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-tzz4s" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:05:45.782505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.18897ms"
	I0416 17:05:45.783766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.486µs"
	I0416 17:05:45.788640       1 event.go:376] "Event occurred" object="kube-system/kindnet-xfr28" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:05:50.796098       1 event.go:376] "Event occurred" object="multinode-334221-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-334221-m03 status is now: NodeNotReady"
	I0416 17:05:50.810156       1 event.go:376] "Event occurred" object="kube-system/kindnet-2q8wk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:05:50.824108       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-xtm5h" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c] <==
	I0416 17:02:57.652233       1 server_others.go:72] "Using iptables proxy"
	I0416 17:02:57.674916       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	I0416 17:02:57.783086       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:02:57.783250       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:02:57.783572       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:02:57.794704       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:02:57.794936       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:02:57.795081       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:02:57.799851       1 config.go:188] "Starting service config controller"
	I0416 17:02:57.801438       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:02:57.801506       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:02:57.801531       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:02:57.800894       1 config.go:315] "Starting node config controller"
	I0416 17:02:57.801897       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:02:57.902071       1 shared_informer.go:318] Caches are synced for node config
	I0416 17:02:57.902150       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:02:57.902160       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008] <==
	I0416 17:08:55.257219       1 server_others.go:72] "Using iptables proxy"
	I0416 17:08:55.281513       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	I0416 17:08:55.413279       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:08:55.413303       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:08:55.413320       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:08:55.429202       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:08:55.429413       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:08:55.429424       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:08:55.436129       1 config.go:188] "Starting service config controller"
	I0416 17:08:55.437088       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:08:55.437244       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:08:55.437254       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:08:55.440895       1 config.go:315] "Starting node config controller"
	I0416 17:08:55.441900       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:08:55.538602       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:08:55.538661       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:08:55.542386       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5] <==
	I0416 17:08:51.274383       1 serving.go:380] Generated self-signed cert in-memory
	W0416 17:08:53.482069       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 17:08:53.482155       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:08:53.482183       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 17:08:53.482208       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 17:08:53.562403       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 17:08:53.562527       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:08:53.570658       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:08:53.570786       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:08:53.582273       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 17:08:53.582357       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 17:08:53.671248       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2] <==
	W0416 17:02:41.711674       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:02:41.711734       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:02:41.717203       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 17:02:41.717765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 17:02:41.760254       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:02:41.760395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:02:41.769170       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:02:41.769294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:02:41.799219       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:02:41.799278       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:02:41.807095       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:02:41.807148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:02:41.817827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 17:02:41.817999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 17:02:41.829134       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:02:41.829246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:02:41.872818       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:02:41.872843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:02:41.882682       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:02:41.882735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0416 17:02:43.882425       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:07:13.835371       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 17:07:13.835469       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 17:07:13.835827       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 17:07:13.836249       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.315623    3071 topology_manager.go:215] "Topology Admit Handler" podUID="90fe0e05-fb6a-4fe3-8eb6-780165e0a570" podNamespace="kube-system" podName="kube-proxy-jjc8v"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.315789    3071 topology_manager.go:215] "Topology Admit Handler" podUID="5dd215e8-2408-4dd5-971e-984ba5364a2b" podNamespace="kube-system" podName="storage-provisioner"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.315912    3071 topology_manager.go:215] "Topology Admit Handler" podUID="bec786d6-f06c-401d-af63-69faa1ffcd84" podNamespace="default" podName="busybox-7fdf7869d9-fn86w"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.330663    3071 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.359143    3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1-lib-modules\") pod \"kindnet-fntnd\" (UID: \"8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1\") " pod="kube-system/kindnet-fntnd"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.359482    3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90fe0e05-fb6a-4fe3-8eb6-780165e0a570-xtables-lock\") pod \"kube-proxy-jjc8v\" (UID: \"90fe0e05-fb6a-4fe3-8eb6-780165e0a570\") " pod="kube-system/kube-proxy-jjc8v"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.360724    3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1-cni-cfg\") pod \"kindnet-fntnd\" (UID: \"8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1\") " pod="kube-system/kindnet-fntnd"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.360901    3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1-xtables-lock\") pod \"kindnet-fntnd\" (UID: \"8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1\") " pod="kube-system/kindnet-fntnd"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.361196    3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90fe0e05-fb6a-4fe3-8eb6-780165e0a570-lib-modules\") pod \"kube-proxy-jjc8v\" (UID: \"90fe0e05-fb6a-4fe3-8eb6-780165e0a570\") " pod="kube-system/kube-proxy-jjc8v"
	Apr 16 17:08:54 multinode-334221 kubelet[3071]: I0416 17:08:54.362551    3071 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5dd215e8-2408-4dd5-971e-984ba5364a2b-tmp\") pod \"storage-provisioner\" (UID: \"5dd215e8-2408-4dd5-971e-984ba5364a2b\") " pod="kube-system/storage-provisioner"
	Apr 16 17:08:59 multinode-334221 kubelet[3071]: I0416 17:08:59.259314    3071 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.392512    3071 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:09:49 multinode-334221 kubelet[3071]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:09:49 multinode-334221 kubelet[3071]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:09:49 multinode-334221 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:09:49 multinode-334221 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.426043    3071 manager.go:1116] Failed to create existing container: /kubepods/pod8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1/crio-0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd: Error finding container 0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd: Status 404 returned error can't find the container with id 0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.426450    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda7a5a5dc6e39c6c525ff7d9719f9ca00/crio-a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7: Error finding container a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7: Status 404 returned error can't find the container with id a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.426862    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod120c3e394989b4d3ebee3b461ba74f97/crio-03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8: Error finding container 03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8: Status 404 returned error can't find the container with id 03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.427547    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod5dd215e8-2408-4dd5-971e-984ba5364a2b/crio-792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003: Error finding container 792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003: Status 404 returned error can't find the container with id 792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.428021    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6d052fc5203f79937ba06a7a4a172dee/crio-258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639: Error finding container 258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639: Status 404 returned error can't find the container with id 258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.428348    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod90fe0e05-fb6a-4fe3-8eb6-780165e0a570/crio-1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853: Error finding container 1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853: Status 404 returned error can't find the container with id 1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.428502    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod967de72eba21f1ee9f74d3a0d8fc1538/crio-1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f: Error finding container 1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f: Status 404 returned error can't find the container with id 1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.428765    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podbec786d6-f06c-401d-af63-69faa1ffcd84/crio-69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31: Error finding container 69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31: Status 404 returned error can't find the container with id 69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31
	Apr 16 17:09:49 multinode-334221 kubelet[3071]: E0416 17:09:49.429276    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/podde04df6b-6ad2-4417-94fd-1d8bb97b864a/crio-e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332: Error finding container e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332: Status 404 returned error can't find the container with id e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:10:15.111617   39873 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18649-3628/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-334221 -n multinode-334221
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-334221 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (307.03s)
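
Side note on the stderr above: "failed to output last start logs: failed to read file .../lastStart.txt: bufio.Scanner: token too long" is Go's bufio.Scanner hitting its default 64 KiB per-line limit, not a missing file. A minimal sketch, not the minikube implementation, of reading a log file with very long lines using a larger scanner buffer (the file path here is hypothetical):

	// read a file line-by-line without tripping bufio.Scanner's
	// default 64 KiB per-line limit (bufio.MaxScanTokenSize)
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path; the real file sits under .minikube/logs/
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// raise the per-line cap from the 64 KiB default to 10 MiB
		sc.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("scan: %v", err) // without the larger buffer, "token too long" surfaces here
		}
	}
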

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 stop
E0416 17:12:03.892919   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 17:12:10.030179   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334221 stop: exit status 82 (2m0.486325004s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-334221-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-334221 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334221 status: exit status 3 (18.88264902s)

                                                
                                                
-- stdout --
	multinode-334221
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-334221-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:12:38.881181   40542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.78:22: connect: no route to host
	E0416 17:12:38.881213   40542 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.78:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-334221 status" : exit status 3
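
For anyone re-running the status check above by hand: the harness's "(dbg) Run" steps capture a command's combined output together with its exit code (here exit status 3, because the m02 host is unreachable over SSH). A minimal Go sketch of that pattern, reusing the binary path and profile name from this run; it is only an illustration, not the test suite's helper:

	// run "minikube status" and report its exit code alongside its output
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status", "-p", "multinode-334221")
		out, err := cmd.CombinedOutput()
		fmt.Printf("-- output --\n%s\n", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// non-zero when any node is unhealthy, e.g. the "exit status 3" observed above
			fmt.Printf("exit status %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run:", err)
		}
	}
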
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-334221 -n multinode-334221
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-334221 logs -n 25: (1.663412816s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m02:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221:/home/docker/cp-test_multinode-334221-m02_multinode-334221.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n multinode-334221 sudo cat                                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /home/docker/cp-test_multinode-334221-m02_multinode-334221.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m02:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03:/home/docker/cp-test_multinode-334221-m02_multinode-334221-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n multinode-334221-m03 sudo cat                                   | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /home/docker/cp-test_multinode-334221-m02_multinode-334221-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp testdata/cp-test.txt                                                | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3051956935/001/cp-test_multinode-334221-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221:/home/docker/cp-test_multinode-334221-m03_multinode-334221.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n multinode-334221 sudo cat                                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /home/docker/cp-test_multinode-334221-m03_multinode-334221.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt                       | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m02:/home/docker/cp-test_multinode-334221-m03_multinode-334221-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n                                                                 | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | multinode-334221-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-334221 ssh -n multinode-334221-m02 sudo cat                                   | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	|         | /home/docker/cp-test_multinode-334221-m03_multinode-334221-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-334221 node stop m03                                                          | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:04 UTC |
	| node    | multinode-334221 node start                                                             | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:04 UTC | 16 Apr 24 17:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-334221                                                                | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:05 UTC |                     |
	| stop    | -p multinode-334221                                                                     | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:05 UTC |                     |
	| start   | -p multinode-334221                                                                     | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:07 UTC | 16 Apr 24 17:10 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-334221                                                                | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC |                     |
	| node    | multinode-334221 node delete                                                            | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC | 16 Apr 24 17:10 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-334221 stop                                                                   | multinode-334221 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:10 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:07:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:07:12.905612   38726 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:07:12.905741   38726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:07:12.905752   38726 out.go:304] Setting ErrFile to fd 2...
	I0416 17:07:12.905759   38726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:07:12.905969   38726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:07:12.906525   38726 out.go:298] Setting JSON to false
	I0416 17:07:12.907430   38726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2985,"bootTime":1713284248,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:07:12.907494   38726 start.go:139] virtualization: kvm guest
	I0416 17:07:12.910000   38726 out.go:177] * [multinode-334221] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:07:12.911444   38726 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:07:12.912802   38726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:07:12.911467   38726 notify.go:220] Checking for updates...
	I0416 17:07:12.914191   38726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:07:12.915788   38726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:07:12.917154   38726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:07:12.918458   38726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:07:12.920205   38726 config.go:182] Loaded profile config "multinode-334221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:07:12.920299   38726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:07:12.920697   38726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:07:12.920741   38726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:07:12.936029   38726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0416 17:07:12.936483   38726 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:07:12.937088   38726 main.go:141] libmachine: Using API Version  1
	I0416 17:07:12.937117   38726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:07:12.937436   38726 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:07:12.937610   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:07:12.974566   38726 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:07:12.975929   38726 start.go:297] selected driver: kvm2
	I0416 17:07:12.975941   38726 start.go:901] validating driver "kvm2" against &{Name:multinode-334221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.95 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:07:12.976075   38726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:07:12.976406   38726 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:07:12.976471   38726 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:07:12.991966   38726 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:07:12.992637   38726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:07:12.992699   38726 cni.go:84] Creating CNI manager for ""
	I0416 17:07:12.992715   38726 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 17:07:12.992763   38726 start.go:340] cluster config:
	{Name:multinode-334221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.95 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:07:12.992903   38726 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:07:12.995424   38726 out.go:177] * Starting "multinode-334221" primary control-plane node in "multinode-334221" cluster
	I0416 17:07:12.996756   38726 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:07:12.996792   38726 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:07:12.996806   38726 cache.go:56] Caching tarball of preloaded images
	I0416 17:07:12.996907   38726 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:07:12.996920   38726 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:07:12.997047   38726 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/config.json ...
	I0416 17:07:12.997256   38726 start.go:360] acquireMachinesLock for multinode-334221: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:07:12.997322   38726 start.go:364] duration metric: took 47.31µs to acquireMachinesLock for "multinode-334221"
	I0416 17:07:12.997340   38726 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:07:12.997353   38726 fix.go:54] fixHost starting: 
	I0416 17:07:12.997607   38726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:07:12.997655   38726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:07:13.011720   38726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0416 17:07:13.012174   38726 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:07:13.012671   38726 main.go:141] libmachine: Using API Version  1
	I0416 17:07:13.012698   38726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:07:13.013009   38726 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:07:13.013181   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:07:13.013343   38726 main.go:141] libmachine: (multinode-334221) Calling .GetState
	I0416 17:07:13.015079   38726 fix.go:112] recreateIfNeeded on multinode-334221: state=Running err=<nil>
	W0416 17:07:13.015113   38726 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:07:13.017345   38726 out.go:177] * Updating the running kvm2 "multinode-334221" VM ...
	I0416 17:07:13.018706   38726 machine.go:94] provisionDockerMachine start ...
	I0416 17:07:13.018725   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:07:13.019210   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.022708   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.023204   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.023237   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.023379   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.023567   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.023736   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.023885   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.024018   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:07:13.024201   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:07:13.024214   38726 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:07:13.134859   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-334221
	
	I0416 17:07:13.134891   38726 main.go:141] libmachine: (multinode-334221) Calling .GetMachineName
	I0416 17:07:13.135135   38726 buildroot.go:166] provisioning hostname "multinode-334221"
	I0416 17:07:13.135163   38726 main.go:141] libmachine: (multinode-334221) Calling .GetMachineName
	I0416 17:07:13.135361   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.137937   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.138358   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.138383   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.138520   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.138692   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.138842   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.138979   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.139130   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:07:13.139283   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:07:13.139296   38726 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-334221 && echo "multinode-334221" | sudo tee /etc/hostname
	I0416 17:07:13.271297   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-334221
	
	I0416 17:07:13.271322   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.274259   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.274640   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.274672   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.274860   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.275059   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.275226   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.275348   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.275483   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:07:13.275686   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:07:13.275703   38726 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-334221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-334221/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-334221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:07:13.382275   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
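	(The shell snippet above ensures /etc/hosts carries a "127.0.1.1 <hostname>" entry for the node. The following is a minimal standalone Go sketch of just the check-and-append branch of that logic; it is an illustrative helper, not minikube's own code, and it assumes root access to /etc/hosts.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry appends "127.0.1.1 <hostname>" to hostsPath unless a line
// for that hostname already exists, mirroring the `grep -xq '.*\s<hostname>'`
// check in the log above. It does not handle the replace-existing-127.0.1.1 case.
func ensureHostsEntry(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data)
	if present {
		return nil
	}
	f, err := os.OpenFile(hostsPath, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "multinode-334221"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}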
	I0416 17:07:13.382307   38726 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:07:13.382346   38726 buildroot.go:174] setting up certificates
	I0416 17:07:13.382364   38726 provision.go:84] configureAuth start
	I0416 17:07:13.382375   38726 main.go:141] libmachine: (multinode-334221) Calling .GetMachineName
	I0416 17:07:13.382684   38726 main.go:141] libmachine: (multinode-334221) Calling .GetIP
	I0416 17:07:13.385558   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.385934   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.385955   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.386157   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.388263   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.388629   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.388665   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.388820   38726 provision.go:143] copyHostCerts
	I0416 17:07:13.388862   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:07:13.388895   38726 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:07:13.388910   38726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:07:13.388975   38726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:07:13.389060   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:07:13.389078   38726 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:07:13.389085   38726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:07:13.389109   38726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:07:13.389154   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:07:13.389170   38726 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:07:13.389176   38726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:07:13.389196   38726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:07:13.389241   38726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.multinode-334221 san=[127.0.0.1 192.168.39.137 localhost minikube multinode-334221]
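	(provision.go above generates a server certificate carrying the listed SANs and signed by the profile CA. As a rough, self-contained Go sketch of issuing a SAN-bearing server certificate, the snippet below creates a self-signed cert instead of a CA-signed one for brevity; the names, IPs, and expiry are taken from the log line above purely for illustration.)

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-334221"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration shown in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-334221"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.137")},
	}
	// Self-signed: template is also the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}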
	I0416 17:07:13.491044   38726 provision.go:177] copyRemoteCerts
	I0416 17:07:13.491102   38726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:07:13.491134   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.493772   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.494120   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.494156   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.494302   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.494486   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.494644   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.494772   38726 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:07:13.576629   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 17:07:13.576691   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 17:07:13.609484   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 17:07:13.609547   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:07:13.638084   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 17:07:13.638145   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:07:13.668815   38726 provision.go:87] duration metric: took 286.437773ms to configureAuth
	I0416 17:07:13.668865   38726 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:07:13.669075   38726 config.go:182] Loaded profile config "multinode-334221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:07:13.669144   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:07:13.671938   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.672313   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:07:13.672339   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:07:13.672521   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:07:13.672713   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.672872   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:07:13.672994   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:07:13.673117   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:07:13.673286   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:07:13.673306   38726 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:08:44.605949   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:08:44.605977   38726 machine.go:97] duration metric: took 1m31.587259535s to provisionDockerMachine
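	(Most of the ~1m31s spent in provisionDockerMachine above is the `systemctl restart crio` that follows writing /etc/sysconfig/crio.minikube. A minimal Go sketch of that write-then-restart step is shown below; the path and option string are copied from the log, it must run as root on a systemd host, and it is illustrative rather than minikube's actual implementation.)

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const opts = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0644); err != nil {
		log.Fatal(err)
	}
	// Restart CRI-O so it picks up the extra options; this is the slow step in the log above.
	out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
	if err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}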
	I0416 17:08:44.605992   38726 start.go:293] postStartSetup for "multinode-334221" (driver="kvm2")
	I0416 17:08:44.606005   38726 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:08:44.606027   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.606377   38726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:08:44.606422   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:08:44.609152   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.609529   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.609559   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.609683   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:08:44.609871   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.610036   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:08:44.610192   38726 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:08:44.716426   38726 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:08:44.721284   38726 command_runner.go:130] > NAME=Buildroot
	I0416 17:08:44.721302   38726 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 17:08:44.721307   38726 command_runner.go:130] > ID=buildroot
	I0416 17:08:44.721311   38726 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 17:08:44.721316   38726 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 17:08:44.721550   38726 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:08:44.721574   38726 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:08:44.721629   38726 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:08:44.721711   38726 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:08:44.721722   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /etc/ssl/certs/109102.pem
	I0416 17:08:44.721814   38726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:08:44.734372   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:08:44.764101   38726 start.go:296] duration metric: took 158.096587ms for postStartSetup
	I0416 17:08:44.764144   38726 fix.go:56] duration metric: took 1m31.766797827s for fixHost
	I0416 17:08:44.764162   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:08:44.766836   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.767312   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.767342   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.767461   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:08:44.767655   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.767837   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.768021   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:08:44.768211   38726 main.go:141] libmachine: Using SSH client type: native
	I0416 17:08:44.768361   38726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0416 17:08:44.768371   38726 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:08:44.870116   38726 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713287324.852816822
	
	I0416 17:08:44.870147   38726 fix.go:216] guest clock: 1713287324.852816822
	I0416 17:08:44.870158   38726 fix.go:229] Guest: 2024-04-16 17:08:44.852816822 +0000 UTC Remote: 2024-04-16 17:08:44.764148197 +0000 UTC m=+91.905067067 (delta=88.668625ms)
	I0416 17:08:44.870186   38726 fix.go:200] guest clock delta is within tolerance: 88.668625ms
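	(fix.go above compares the guest clock, read via `date +%s.%N` over SSH, against a host-side reference timestamp and accepts the drift if it is small. A small Go sketch of that comparison follows; the one-second tolerance is an arbitrary illustrative threshold, not minikube's value, and the sample timestamps are the ones from the log.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the "seconds.nanoseconds" output of `date +%s.%N`
// and returns how far the guest clock is from the supplied reference time.
func clockDelta(guestOut string, ref time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(ref), nil
}

func main() {
	const tolerance = time.Second // illustrative threshold only
	d, err := clockDelta("1713287324.852816822", time.Unix(1713287324, 764148197))
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(d)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", d)
	}
}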
	I0416 17:08:44.870193   38726 start.go:83] releasing machines lock for "multinode-334221", held for 1m31.872860229s
	I0416 17:08:44.870218   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.870487   38726 main.go:141] libmachine: (multinode-334221) Calling .GetIP
	I0416 17:08:44.872873   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.873217   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.873240   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.873408   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.874047   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.874238   38726 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:08:44.874319   38726 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:08:44.874364   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:08:44.874468   38726 ssh_runner.go:195] Run: cat /version.json
	I0416 17:08:44.874492   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:08:44.876878   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.877168   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.877195   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.877218   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.877358   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:08:44.877529   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.877676   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:08:44.877682   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:44.877705   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:44.877837   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:08:44.877849   38726 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:08:44.877947   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:08:44.878091   38726 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:08:44.878297   38726 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:08:44.974557   38726 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 17:08:44.975292   38726 command_runner.go:130] > {"iso_version": "v1.33.0-1713236417-18649", "kicbase_version": "v0.0.43-1713215244-18647", "minikube_version": "v1.33.0-beta.0", "commit": "4ec1a3e88a9f3ffb3930e555284d907468ae83a6"}
	I0416 17:08:44.975444   38726 ssh_runner.go:195] Run: systemctl --version
	I0416 17:08:44.983215   38726 command_runner.go:130] > systemd 252 (252)
	I0416 17:08:44.983247   38726 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 17:08:44.983505   38726 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:08:45.152810   38726 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 17:08:45.162874   38726 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 17:08:45.163320   38726 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:08:45.163399   38726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:08:45.173978   38726 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
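	(The find/mv invocation above renames any bridge/podman CNI configs to *.mk_disabled so they cannot conflict with kindnet. Below is a rough glob-and-rename equivalent in Go, using the same directory and suffix as the log; it is a sketch under those assumptions, not minikube's code, and unlike the original it does not filter on file type.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Println("disabled", m)
		}
	}
}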
	I0416 17:08:45.174017   38726 start.go:494] detecting cgroup driver to use...
	I0416 17:08:45.174094   38726 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:08:45.192938   38726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:08:45.209118   38726 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:08:45.209178   38726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:08:45.224855   38726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:08:45.241963   38726 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:08:45.402793   38726 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:08:45.558710   38726 docker.go:233] disabling docker service ...
	I0416 17:08:45.558800   38726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:08:45.575579   38726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:08:45.590194   38726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:08:45.732928   38726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:08:45.886383   38726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:08:45.901910   38726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:08:45.924793   38726 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0416 17:08:45.924849   38726 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:08:45.924905   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.936588   38726 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:08:45.936654   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.948577   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.960266   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.973162   38726 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:08:45.985110   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:45.997105   38726 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:08:46.011301   38726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
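	(The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager to cgroupfs. The Go sketch below reproduces just those first two line rewrites with regexp instead of sed; the path and replacement values come from the log, it must run as root, and it is illustrative only.)

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0644); err != nil {
		log.Fatal(err)
	}
}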
	I0416 17:08:46.023318   38726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:08:46.033633   38726 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 17:08:46.033697   38726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:08:46.043983   38726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:08:46.186854   38726 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:08:46.444495   38726 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:08:46.444569   38726 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:08:46.451729   38726 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0416 17:08:46.451751   38726 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 17:08:46.451757   38726 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I0416 17:08:46.451764   38726 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 17:08:46.451769   38726 command_runner.go:130] > Access: 2024-04-16 17:08:46.388176059 +0000
	I0416 17:08:46.451786   38726 command_runner.go:130] > Modify: 2024-04-16 17:08:46.316172959 +0000
	I0416 17:08:46.451794   38726 command_runner.go:130] > Change: 2024-04-16 17:08:46.316172959 +0000
	I0416 17:08:46.451803   38726 command_runner.go:130] >  Birth: -
	I0416 17:08:46.451822   38726 start.go:562] Will wait 60s for crictl version
	I0416 17:08:46.451886   38726 ssh_runner.go:195] Run: which crictl
	I0416 17:08:46.456722   38726 command_runner.go:130] > /usr/bin/crictl
	I0416 17:08:46.456797   38726 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:08:46.499963   38726 command_runner.go:130] > Version:  0.1.0
	I0416 17:08:46.499984   38726 command_runner.go:130] > RuntimeName:  cri-o
	I0416 17:08:46.499989   38726 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0416 17:08:46.499994   38726 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 17:08:46.500148   38726 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:08:46.500226   38726 ssh_runner.go:195] Run: crio --version
	I0416 17:08:46.530148   38726 command_runner.go:130] > crio version 1.29.1
	I0416 17:08:46.530169   38726 command_runner.go:130] > Version:        1.29.1
	I0416 17:08:46.530175   38726 command_runner.go:130] > GitCommit:      unknown
	I0416 17:08:46.530179   38726 command_runner.go:130] > GitCommitDate:  unknown
	I0416 17:08:46.530183   38726 command_runner.go:130] > GitTreeState:   clean
	I0416 17:08:46.530189   38726 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0416 17:08:46.530193   38726 command_runner.go:130] > GoVersion:      go1.21.6
	I0416 17:08:46.530196   38726 command_runner.go:130] > Compiler:       gc
	I0416 17:08:46.530201   38726 command_runner.go:130] > Platform:       linux/amd64
	I0416 17:08:46.530205   38726 command_runner.go:130] > Linkmode:       dynamic
	I0416 17:08:46.530222   38726 command_runner.go:130] > BuildTags:      
	I0416 17:08:46.530226   38726 command_runner.go:130] >   containers_image_ostree_stub
	I0416 17:08:46.530233   38726 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0416 17:08:46.530237   38726 command_runner.go:130] >   btrfs_noversion
	I0416 17:08:46.530241   38726 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0416 17:08:46.530246   38726 command_runner.go:130] >   libdm_no_deferred_remove
	I0416 17:08:46.530249   38726 command_runner.go:130] >   seccomp
	I0416 17:08:46.530253   38726 command_runner.go:130] > LDFlags:          unknown
	I0416 17:08:46.530261   38726 command_runner.go:130] > SeccompEnabled:   true
	I0416 17:08:46.530265   38726 command_runner.go:130] > AppArmorEnabled:  false
	I0416 17:08:46.531566   38726 ssh_runner.go:195] Run: crio --version
	I0416 17:08:46.563207   38726 command_runner.go:130] > crio version 1.29.1
	I0416 17:08:46.563228   38726 command_runner.go:130] > Version:        1.29.1
	I0416 17:08:46.563233   38726 command_runner.go:130] > GitCommit:      unknown
	I0416 17:08:46.563238   38726 command_runner.go:130] > GitCommitDate:  unknown
	I0416 17:08:46.563242   38726 command_runner.go:130] > GitTreeState:   clean
	I0416 17:08:46.563247   38726 command_runner.go:130] > BuildDate:      2024-04-16T08:37:30Z
	I0416 17:08:46.563251   38726 command_runner.go:130] > GoVersion:      go1.21.6
	I0416 17:08:46.563255   38726 command_runner.go:130] > Compiler:       gc
	I0416 17:08:46.563260   38726 command_runner.go:130] > Platform:       linux/amd64
	I0416 17:08:46.563264   38726 command_runner.go:130] > Linkmode:       dynamic
	I0416 17:08:46.563268   38726 command_runner.go:130] > BuildTags:      
	I0416 17:08:46.563273   38726 command_runner.go:130] >   containers_image_ostree_stub
	I0416 17:08:46.563277   38726 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0416 17:08:46.563281   38726 command_runner.go:130] >   btrfs_noversion
	I0416 17:08:46.563286   38726 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0416 17:08:46.563291   38726 command_runner.go:130] >   libdm_no_deferred_remove
	I0416 17:08:46.563299   38726 command_runner.go:130] >   seccomp
	I0416 17:08:46.563305   38726 command_runner.go:130] > LDFlags:          unknown
	I0416 17:08:46.563311   38726 command_runner.go:130] > SeccompEnabled:   true
	I0416 17:08:46.563318   38726 command_runner.go:130] > AppArmorEnabled:  false
	I0416 17:08:46.567393   38726 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 17:08:46.568891   38726 main.go:141] libmachine: (multinode-334221) Calling .GetIP
	I0416 17:08:46.571433   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:46.571781   38726 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:08:46.571812   38726 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:08:46.572034   38726 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 17:08:46.576590   38726 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0416 17:08:46.576729   38726 kubeadm.go:877] updating cluster {Name:multinode-334221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.95 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:08:46.576873   38726 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:08:46.576920   38726 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:08:46.619187   38726 command_runner.go:130] > {
	I0416 17:08:46.619205   38726 command_runner.go:130] >   "images": [
	I0416 17:08:46.619209   38726 command_runner.go:130] >     {
	I0416 17:08:46.619217   38726 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0416 17:08:46.619221   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619228   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0416 17:08:46.619231   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619236   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619244   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0416 17:08:46.619253   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0416 17:08:46.619256   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619262   38726 command_runner.go:130] >       "size": "65291810",
	I0416 17:08:46.619266   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619270   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619279   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619284   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619289   38726 command_runner.go:130] >     },
	I0416 17:08:46.619293   38726 command_runner.go:130] >     {
	I0416 17:08:46.619306   38726 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0416 17:08:46.619310   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619315   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0416 17:08:46.619319   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619325   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619332   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0416 17:08:46.619344   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0416 17:08:46.619348   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619352   38726 command_runner.go:130] >       "size": "1363676",
	I0416 17:08:46.619356   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619362   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619367   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619371   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619375   38726 command_runner.go:130] >     },
	I0416 17:08:46.619378   38726 command_runner.go:130] >     {
	I0416 17:08:46.619385   38726 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0416 17:08:46.619389   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619394   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0416 17:08:46.619398   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619402   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619410   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0416 17:08:46.619418   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0416 17:08:46.619422   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619430   38726 command_runner.go:130] >       "size": "31470524",
	I0416 17:08:46.619434   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619438   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619441   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619445   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619449   38726 command_runner.go:130] >     },
	I0416 17:08:46.619452   38726 command_runner.go:130] >     {
	I0416 17:08:46.619458   38726 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0416 17:08:46.619464   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619469   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0416 17:08:46.619474   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619477   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619485   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0416 17:08:46.619496   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0416 17:08:46.619501   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619505   38726 command_runner.go:130] >       "size": "61245718",
	I0416 17:08:46.619508   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619513   38726 command_runner.go:130] >       "username": "nonroot",
	I0416 17:08:46.619517   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619521   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619527   38726 command_runner.go:130] >     },
	I0416 17:08:46.619531   38726 command_runner.go:130] >     {
	I0416 17:08:46.619536   38726 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0416 17:08:46.619543   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619547   38726 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0416 17:08:46.619553   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619557   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619566   38726 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0416 17:08:46.619575   38726 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0416 17:08:46.619581   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619585   38726 command_runner.go:130] >       "size": "150779692",
	I0416 17:08:46.619591   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619595   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.619600   38726 command_runner.go:130] >       },
	I0416 17:08:46.619604   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619608   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619612   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619615   38726 command_runner.go:130] >     },
	I0416 17:08:46.619619   38726 command_runner.go:130] >     {
	I0416 17:08:46.619625   38726 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0416 17:08:46.619629   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619634   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0416 17:08:46.619647   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619651   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619657   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0416 17:08:46.619664   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0416 17:08:46.619666   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619670   38726 command_runner.go:130] >       "size": "128508878",
	I0416 17:08:46.619674   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619678   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.619681   38726 command_runner.go:130] >       },
	I0416 17:08:46.619685   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619688   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619692   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619695   38726 command_runner.go:130] >     },
	I0416 17:08:46.619700   38726 command_runner.go:130] >     {
	I0416 17:08:46.619706   38726 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0416 17:08:46.619713   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619718   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0416 17:08:46.619723   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619727   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619736   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0416 17:08:46.619746   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0416 17:08:46.619752   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619756   38726 command_runner.go:130] >       "size": "123142962",
	I0416 17:08:46.619762   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619767   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.619772   38726 command_runner.go:130] >       },
	I0416 17:08:46.619776   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619780   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619786   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619790   38726 command_runner.go:130] >     },
	I0416 17:08:46.619795   38726 command_runner.go:130] >     {
	I0416 17:08:46.619801   38726 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0416 17:08:46.619808   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619813   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0416 17:08:46.619818   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619822   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619838   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0416 17:08:46.619847   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0416 17:08:46.619851   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619855   38726 command_runner.go:130] >       "size": "83634073",
	I0416 17:08:46.619859   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.619862   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619866   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619870   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619873   38726 command_runner.go:130] >     },
	I0416 17:08:46.619876   38726 command_runner.go:130] >     {
	I0416 17:08:46.619882   38726 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0416 17:08:46.619885   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619890   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0416 17:08:46.619894   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619898   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619905   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0416 17:08:46.619912   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0416 17:08:46.619916   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619919   38726 command_runner.go:130] >       "size": "60724018",
	I0416 17:08:46.619923   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619926   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.619929   38726 command_runner.go:130] >       },
	I0416 17:08:46.619933   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.619936   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.619940   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.619943   38726 command_runner.go:130] >     },
	I0416 17:08:46.619947   38726 command_runner.go:130] >     {
	I0416 17:08:46.619953   38726 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0416 17:08:46.619956   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.619960   38726 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0416 17:08:46.619963   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619966   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.619973   38726 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0416 17:08:46.619979   38726 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0416 17:08:46.619983   38726 command_runner.go:130] >       ],
	I0416 17:08:46.619987   38726 command_runner.go:130] >       "size": "750414",
	I0416 17:08:46.619991   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.619994   38726 command_runner.go:130] >         "value": "65535"
	I0416 17:08:46.619998   38726 command_runner.go:130] >       },
	I0416 17:08:46.620002   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.620006   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.620010   38726 command_runner.go:130] >       "pinned": true
	I0416 17:08:46.620013   38726 command_runner.go:130] >     }
	I0416 17:08:46.620016   38726 command_runner.go:130] >   ]
	I0416 17:08:46.620019   38726 command_runner.go:130] > }
	I0416 17:08:46.620384   38726 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:08:46.620395   38726 crio.go:433] Images already preloaded, skipping extraction
	I0416 17:08:46.620435   38726 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:08:46.654222   38726 command_runner.go:130] > {
	I0416 17:08:46.654239   38726 command_runner.go:130] >   "images": [
	I0416 17:08:46.654243   38726 command_runner.go:130] >     {
	I0416 17:08:46.654252   38726 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0416 17:08:46.654259   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654265   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0416 17:08:46.654269   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654275   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654283   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0416 17:08:46.654291   38726 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0416 17:08:46.654300   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654308   38726 command_runner.go:130] >       "size": "65291810",
	I0416 17:08:46.654312   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654316   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654333   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654340   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654344   38726 command_runner.go:130] >     },
	I0416 17:08:46.654349   38726 command_runner.go:130] >     {
	I0416 17:08:46.654355   38726 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0416 17:08:46.654361   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654367   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0416 17:08:46.654373   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654378   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654387   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0416 17:08:46.654396   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0416 17:08:46.654401   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654406   38726 command_runner.go:130] >       "size": "1363676",
	I0416 17:08:46.654412   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654418   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654424   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654428   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654436   38726 command_runner.go:130] >     },
	I0416 17:08:46.654442   38726 command_runner.go:130] >     {
	I0416 17:08:46.654447   38726 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0416 17:08:46.654453   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654459   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0416 17:08:46.654465   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654469   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654479   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0416 17:08:46.654488   38726 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0416 17:08:46.654494   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654499   38726 command_runner.go:130] >       "size": "31470524",
	I0416 17:08:46.654505   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654508   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654515   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654519   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654524   38726 command_runner.go:130] >     },
	I0416 17:08:46.654528   38726 command_runner.go:130] >     {
	I0416 17:08:46.654536   38726 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0416 17:08:46.654542   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654547   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0416 17:08:46.654553   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654557   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654567   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0416 17:08:46.654580   38726 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0416 17:08:46.654586   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654590   38726 command_runner.go:130] >       "size": "61245718",
	I0416 17:08:46.654596   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654601   38726 command_runner.go:130] >       "username": "nonroot",
	I0416 17:08:46.654609   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654616   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654619   38726 command_runner.go:130] >     },
	I0416 17:08:46.654626   38726 command_runner.go:130] >     {
	I0416 17:08:46.654632   38726 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0416 17:08:46.654638   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654643   38726 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0416 17:08:46.654646   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654651   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654661   38726 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0416 17:08:46.654667   38726 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0416 17:08:46.654673   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654677   38726 command_runner.go:130] >       "size": "150779692",
	I0416 17:08:46.654680   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.654684   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.654687   38726 command_runner.go:130] >       },
	I0416 17:08:46.654691   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654695   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654701   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654705   38726 command_runner.go:130] >     },
	I0416 17:08:46.654709   38726 command_runner.go:130] >     {
	I0416 17:08:46.654716   38726 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0416 17:08:46.654722   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654727   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0416 17:08:46.654733   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654737   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654743   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0416 17:08:46.654752   38726 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0416 17:08:46.654758   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654762   38726 command_runner.go:130] >       "size": "128508878",
	I0416 17:08:46.654768   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.654772   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.654778   38726 command_runner.go:130] >       },
	I0416 17:08:46.654782   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654788   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654791   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654797   38726 command_runner.go:130] >     },
	I0416 17:08:46.654800   38726 command_runner.go:130] >     {
	I0416 17:08:46.654808   38726 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0416 17:08:46.654815   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654820   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0416 17:08:46.654826   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654830   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654837   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0416 17:08:46.654847   38726 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0416 17:08:46.654856   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654861   38726 command_runner.go:130] >       "size": "123142962",
	I0416 17:08:46.654867   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.654871   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.654874   38726 command_runner.go:130] >       },
	I0416 17:08:46.654878   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654882   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654885   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654888   38726 command_runner.go:130] >     },
	I0416 17:08:46.654891   38726 command_runner.go:130] >     {
	I0416 17:08:46.654897   38726 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0416 17:08:46.654903   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654908   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0416 17:08:46.654914   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654917   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.654929   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0416 17:08:46.654938   38726 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0416 17:08:46.654941   38726 command_runner.go:130] >       ],
	I0416 17:08:46.654948   38726 command_runner.go:130] >       "size": "83634073",
	I0416 17:08:46.654955   38726 command_runner.go:130] >       "uid": null,
	I0416 17:08:46.654959   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.654965   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.654969   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.654975   38726 command_runner.go:130] >     },
	I0416 17:08:46.654978   38726 command_runner.go:130] >     {
	I0416 17:08:46.654987   38726 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0416 17:08:46.654991   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.654996   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0416 17:08:46.655002   38726 command_runner.go:130] >       ],
	I0416 17:08:46.655006   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.655015   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0416 17:08:46.655026   38726 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0416 17:08:46.655032   38726 command_runner.go:130] >       ],
	I0416 17:08:46.655037   38726 command_runner.go:130] >       "size": "60724018",
	I0416 17:08:46.655043   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.655047   38726 command_runner.go:130] >         "value": "0"
	I0416 17:08:46.655053   38726 command_runner.go:130] >       },
	I0416 17:08:46.655058   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.655064   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.655068   38726 command_runner.go:130] >       "pinned": false
	I0416 17:08:46.655071   38726 command_runner.go:130] >     },
	I0416 17:08:46.655074   38726 command_runner.go:130] >     {
	I0416 17:08:46.655083   38726 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0416 17:08:46.655087   38726 command_runner.go:130] >       "repoTags": [
	I0416 17:08:46.655094   38726 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0416 17:08:46.655097   38726 command_runner.go:130] >       ],
	I0416 17:08:46.655101   38726 command_runner.go:130] >       "repoDigests": [
	I0416 17:08:46.655109   38726 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0416 17:08:46.655121   38726 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0416 17:08:46.655127   38726 command_runner.go:130] >       ],
	I0416 17:08:46.655131   38726 command_runner.go:130] >       "size": "750414",
	I0416 17:08:46.655135   38726 command_runner.go:130] >       "uid": {
	I0416 17:08:46.655141   38726 command_runner.go:130] >         "value": "65535"
	I0416 17:08:46.655145   38726 command_runner.go:130] >       },
	I0416 17:08:46.655149   38726 command_runner.go:130] >       "username": "",
	I0416 17:08:46.655153   38726 command_runner.go:130] >       "spec": null,
	I0416 17:08:46.655160   38726 command_runner.go:130] >       "pinned": true
	I0416 17:08:46.655163   38726 command_runner.go:130] >     }
	I0416 17:08:46.655166   38726 command_runner.go:130] >   ]
	I0416 17:08:46.655171   38726 command_runner.go:130] > }
	I0416 17:08:46.655635   38726 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:08:46.655649   38726 cache_images.go:84] Images are preloaded, skipping loading
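
(Aside: the preload check logged above amounts to listing the images CRI-O already has via `sudo crictl images --output json` and confirming the expected set for the requested Kubernetes version is present. Below is a minimal, standalone Go sketch of that idea; it is not minikube's actual cache_images code, and the expected-image list is simply copied from the listing above.)

// preloadcheck: hedged sketch of the "images are preloaded" check logged above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape printed in the log: an "images" array whose
// entries carry "repoTags" among other fields we can ignore here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Same command the log shows being run on the node.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Expected images for v1.29.3 with the crio runtime, as listed in the log.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.29.3",
		"registry.k8s.io/kube-controller-manager:v1.29.3",
		"registry.k8s.io/kube-scheduler:v1.29.3",
		"registry.k8s.io/kube-proxy:v1.29.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, w := range want {
		if !have[w] {
			fmt.Println("missing:", w)
			return
		}
	}
	fmt.Println("all images are preloaded")
}
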
	I0416 17:08:46.655657   38726 kubeadm.go:928] updating node { 192.168.39.137 8443 v1.29.3 crio true true} ...
	I0416 17:08:46.655746   38726 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-334221 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
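
(Aside: the kubelet [Service] drop-in logged above varies only in the Kubernetes version, the node name, and the node IP. The following Go sketch renders an equivalent unit with text/template; it is a hypothetical illustration, not minikube's actual template, and the values are taken from the log entry above.)

// kubeletunit: hedged sketch of rendering the kubelet drop-in shown in the log.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Per-node values copied from the log entry above.
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.29.3", "multinode-334221", "192.168.39.137"}

	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
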
	I0416 17:08:46.655804   38726 ssh_runner.go:195] Run: crio config
	I0416 17:08:46.698698   38726 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0416 17:08:46.698721   38726 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0416 17:08:46.698728   38726 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0416 17:08:46.698732   38726 command_runner.go:130] > #
	I0416 17:08:46.698739   38726 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0416 17:08:46.698745   38726 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0416 17:08:46.698750   38726 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0416 17:08:46.698757   38726 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0416 17:08:46.698761   38726 command_runner.go:130] > # reload'.
	I0416 17:08:46.698767   38726 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0416 17:08:46.698776   38726 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0416 17:08:46.698786   38726 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0416 17:08:46.698794   38726 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0416 17:08:46.698806   38726 command_runner.go:130] > [crio]
	I0416 17:08:46.698815   38726 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0416 17:08:46.698824   38726 command_runner.go:130] > # containers images, in this directory.
	I0416 17:08:46.698829   38726 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0416 17:08:46.698848   38726 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0416 17:08:46.698856   38726 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0416 17:08:46.698865   38726 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0416 17:08:46.698872   38726 command_runner.go:130] > # imagestore = ""
	I0416 17:08:46.698881   38726 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0416 17:08:46.698895   38726 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0416 17:08:46.698905   38726 command_runner.go:130] > storage_driver = "overlay"
	I0416 17:08:46.698912   38726 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0416 17:08:46.698918   38726 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0416 17:08:46.698922   38726 command_runner.go:130] > storage_option = [
	I0416 17:08:46.698927   38726 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0416 17:08:46.698930   38726 command_runner.go:130] > ]
	I0416 17:08:46.698938   38726 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0416 17:08:46.698943   38726 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0416 17:08:46.698950   38726 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0416 17:08:46.698955   38726 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0416 17:08:46.698961   38726 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0416 17:08:46.698969   38726 command_runner.go:130] > # always happen on a node reboot
	I0416 17:08:46.698973   38726 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0416 17:08:46.698984   38726 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0416 17:08:46.698990   38726 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0416 17:08:46.698995   38726 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0416 17:08:46.699000   38726 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0416 17:08:46.699008   38726 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0416 17:08:46.699017   38726 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0416 17:08:46.699022   38726 command_runner.go:130] > # internal_wipe = true
	I0416 17:08:46.699030   38726 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0416 17:08:46.699042   38726 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0416 17:08:46.699052   38726 command_runner.go:130] > # internal_repair = false
	I0416 17:08:46.699061   38726 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0416 17:08:46.699074   38726 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0416 17:08:46.699083   38726 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0416 17:08:46.699089   38726 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0416 17:08:46.699097   38726 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0416 17:08:46.699100   38726 command_runner.go:130] > [crio.api]
	I0416 17:08:46.699105   38726 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0416 17:08:46.699114   38726 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0416 17:08:46.699129   38726 command_runner.go:130] > # IP address on which the stream server will listen.
	I0416 17:08:46.699143   38726 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0416 17:08:46.699158   38726 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0416 17:08:46.699169   38726 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0416 17:08:46.699177   38726 command_runner.go:130] > # stream_port = "0"
	I0416 17:08:46.699186   38726 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0416 17:08:46.699195   38726 command_runner.go:130] > # stream_enable_tls = false
	I0416 17:08:46.699204   38726 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0416 17:08:46.699213   38726 command_runner.go:130] > # stream_idle_timeout = ""
	I0416 17:08:46.699224   38726 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0416 17:08:46.699238   38726 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0416 17:08:46.699247   38726 command_runner.go:130] > # minutes.
	I0416 17:08:46.699253   38726 command_runner.go:130] > # stream_tls_cert = ""
	I0416 17:08:46.699266   38726 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0416 17:08:46.699277   38726 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0416 17:08:46.699283   38726 command_runner.go:130] > # stream_tls_key = ""
	I0416 17:08:46.699292   38726 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0416 17:08:46.699323   38726 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0416 17:08:46.699338   38726 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0416 17:08:46.699348   38726 command_runner.go:130] > # stream_tls_ca = ""
	I0416 17:08:46.699360   38726 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0416 17:08:46.699370   38726 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0416 17:08:46.699421   38726 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0416 17:08:46.699448   38726 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0416 17:08:46.699460   38726 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0416 17:08:46.699469   38726 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0416 17:08:46.699480   38726 command_runner.go:130] > [crio.runtime]
	I0416 17:08:46.699493   38726 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0416 17:08:46.699504   38726 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0416 17:08:46.699512   38726 command_runner.go:130] > # "nofile=1024:2048"
	I0416 17:08:46.699522   38726 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0416 17:08:46.699532   38726 command_runner.go:130] > # default_ulimits = [
	I0416 17:08:46.699538   38726 command_runner.go:130] > # ]
	I0416 17:08:46.699546   38726 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0416 17:08:46.699556   38726 command_runner.go:130] > # no_pivot = false
	I0416 17:08:46.699565   38726 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0416 17:08:46.699579   38726 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0416 17:08:46.699600   38726 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0416 17:08:46.699615   38726 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0416 17:08:46.699627   38726 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0416 17:08:46.699642   38726 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0416 17:08:46.699649   38726 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0416 17:08:46.699660   38726 command_runner.go:130] > # Cgroup setting for conmon
	I0416 17:08:46.699670   38726 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0416 17:08:46.699682   38726 command_runner.go:130] > conmon_cgroup = "pod"
	I0416 17:08:46.699693   38726 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0416 17:08:46.699704   38726 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0416 17:08:46.699718   38726 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0416 17:08:46.699729   38726 command_runner.go:130] > conmon_env = [
	I0416 17:08:46.699739   38726 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0416 17:08:46.699747   38726 command_runner.go:130] > ]
	I0416 17:08:46.699756   38726 command_runner.go:130] > # Additional environment variables to set for all the
	I0416 17:08:46.699767   38726 command_runner.go:130] > # containers. These are overridden if set in the
	I0416 17:08:46.699777   38726 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0416 17:08:46.699788   38726 command_runner.go:130] > # default_env = [
	I0416 17:08:46.699796   38726 command_runner.go:130] > # ]
	I0416 17:08:46.699805   38726 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0416 17:08:46.699819   38726 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0416 17:08:46.699828   38726 command_runner.go:130] > # selinux = false
	I0416 17:08:46.699838   38726 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0416 17:08:46.699850   38726 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0416 17:08:46.699859   38726 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0416 17:08:46.699863   38726 command_runner.go:130] > # seccomp_profile = ""
	I0416 17:08:46.699868   38726 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0416 17:08:46.699877   38726 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0416 17:08:46.699886   38726 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0416 17:08:46.699897   38726 command_runner.go:130] > # which might increase security.
	I0416 17:08:46.699905   38726 command_runner.go:130] > # This option is currently deprecated,
	I0416 17:08:46.699918   38726 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0416 17:08:46.699928   38726 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0416 17:08:46.699939   38726 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0416 17:08:46.699955   38726 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0416 17:08:46.699967   38726 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0416 17:08:46.699979   38726 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0416 17:08:46.699990   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.700004   38726 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0416 17:08:46.700020   38726 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0416 17:08:46.700030   38726 command_runner.go:130] > # the cgroup blockio controller.
	I0416 17:08:46.700037   38726 command_runner.go:130] > # blockio_config_file = ""
	I0416 17:08:46.700050   38726 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0416 17:08:46.700059   38726 command_runner.go:130] > # blockio parameters.
	I0416 17:08:46.700066   38726 command_runner.go:130] > # blockio_reload = false
	I0416 17:08:46.700080   38726 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0416 17:08:46.700091   38726 command_runner.go:130] > # irqbalance daemon.
	I0416 17:08:46.700103   38726 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0416 17:08:46.700113   38726 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0416 17:08:46.700140   38726 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0416 17:08:46.700148   38726 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0416 17:08:46.700160   38726 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0416 17:08:46.700173   38726 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0416 17:08:46.700185   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.700193   38726 command_runner.go:130] > # rdt_config_file = ""
	I0416 17:08:46.700203   38726 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0416 17:08:46.700214   38726 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0416 17:08:46.700237   38726 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0416 17:08:46.700248   38726 command_runner.go:130] > # separate_pull_cgroup = ""
	I0416 17:08:46.700258   38726 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0416 17:08:46.700271   38726 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0416 17:08:46.700280   38726 command_runner.go:130] > # will be added.
	I0416 17:08:46.700287   38726 command_runner.go:130] > # default_capabilities = [
	I0416 17:08:46.700296   38726 command_runner.go:130] > # 	"CHOWN",
	I0416 17:08:46.700303   38726 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0416 17:08:46.700310   38726 command_runner.go:130] > # 	"FSETID",
	I0416 17:08:46.700315   38726 command_runner.go:130] > # 	"FOWNER",
	I0416 17:08:46.700318   38726 command_runner.go:130] > # 	"SETGID",
	I0416 17:08:46.700322   38726 command_runner.go:130] > # 	"SETUID",
	I0416 17:08:46.700326   38726 command_runner.go:130] > # 	"SETPCAP",
	I0416 17:08:46.700330   38726 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0416 17:08:46.700334   38726 command_runner.go:130] > # 	"KILL",
	I0416 17:08:46.700345   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700359   38726 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0416 17:08:46.700373   38726 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0416 17:08:46.700384   38726 command_runner.go:130] > # add_inheritable_capabilities = false
	I0416 17:08:46.700397   38726 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0416 17:08:46.700409   38726 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0416 17:08:46.700419   38726 command_runner.go:130] > default_sysctls = [
	I0416 17:08:46.700431   38726 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0416 17:08:46.700439   38726 command_runner.go:130] > ]
	I0416 17:08:46.700447   38726 command_runner.go:130] > # List of devices on the host that a
	I0416 17:08:46.700459   38726 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0416 17:08:46.700469   38726 command_runner.go:130] > # allowed_devices = [
	I0416 17:08:46.700475   38726 command_runner.go:130] > # 	"/dev/fuse",
	I0416 17:08:46.700484   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700491   38726 command_runner.go:130] > # List of additional devices, specified as
	I0416 17:08:46.700505   38726 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0416 17:08:46.700513   38726 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0416 17:08:46.700526   38726 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0416 17:08:46.700536   38726 command_runner.go:130] > # additional_devices = [
	I0416 17:08:46.700542   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700556   38726 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0416 17:08:46.700566   38726 command_runner.go:130] > # cdi_spec_dirs = [
	I0416 17:08:46.700571   38726 command_runner.go:130] > # 	"/etc/cdi",
	I0416 17:08:46.700578   38726 command_runner.go:130] > # 	"/var/run/cdi",
	I0416 17:08:46.700583   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700596   38726 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0416 17:08:46.700606   38726 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0416 17:08:46.700615   38726 command_runner.go:130] > # Defaults to false.
	I0416 17:08:46.700623   38726 command_runner.go:130] > # device_ownership_from_security_context = false
	I0416 17:08:46.700636   38726 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0416 17:08:46.700649   38726 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0416 17:08:46.700658   38726 command_runner.go:130] > # hooks_dir = [
	I0416 17:08:46.700666   38726 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0416 17:08:46.700675   38726 command_runner.go:130] > # ]
	I0416 17:08:46.700684   38726 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0416 17:08:46.700697   38726 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0416 17:08:46.700706   38726 command_runner.go:130] > # its default mounts from the following two files:
	I0416 17:08:46.700714   38726 command_runner.go:130] > #
	I0416 17:08:46.700724   38726 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0416 17:08:46.700737   38726 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0416 17:08:46.700748   38726 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0416 17:08:46.700756   38726 command_runner.go:130] > #
	I0416 17:08:46.700765   38726 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0416 17:08:46.700779   38726 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0416 17:08:46.700792   38726 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0416 17:08:46.700803   38726 command_runner.go:130] > #      only add mounts it finds in this file.
	I0416 17:08:46.700810   38726 command_runner.go:130] > #
	I0416 17:08:46.700816   38726 command_runner.go:130] > # default_mounts_file = ""
	I0416 17:08:46.700824   38726 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0416 17:08:46.700852   38726 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0416 17:08:46.700863   38726 command_runner.go:130] > pids_limit = 1024
	I0416 17:08:46.700873   38726 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0416 17:08:46.700885   38726 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0416 17:08:46.700897   38726 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0416 17:08:46.700912   38726 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0416 17:08:46.700922   38726 command_runner.go:130] > # log_size_max = -1
	I0416 17:08:46.700933   38726 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0416 17:08:46.700944   38726 command_runner.go:130] > # log_to_journald = false
	I0416 17:08:46.700953   38726 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0416 17:08:46.700964   38726 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0416 17:08:46.700971   38726 command_runner.go:130] > # Path to directory for container attach sockets.
	I0416 17:08:46.700981   38726 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0416 17:08:46.700986   38726 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0416 17:08:46.700996   38726 command_runner.go:130] > # bind_mount_prefix = ""
	I0416 17:08:46.701005   38726 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0416 17:08:46.701015   38726 command_runner.go:130] > # read_only = false
	I0416 17:08:46.701024   38726 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0416 17:08:46.701037   38726 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0416 17:08:46.701048   38726 command_runner.go:130] > # live configuration reload.
	I0416 17:08:46.701054   38726 command_runner.go:130] > # log_level = "info"
	I0416 17:08:46.701066   38726 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0416 17:08:46.701078   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.701090   38726 command_runner.go:130] > # log_filter = ""
	I0416 17:08:46.701103   38726 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0416 17:08:46.701122   38726 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0416 17:08:46.701131   38726 command_runner.go:130] > # separated by comma.
	I0416 17:08:46.701144   38726 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 17:08:46.701154   38726 command_runner.go:130] > # uid_mappings = ""
	I0416 17:08:46.701164   38726 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0416 17:08:46.701177   38726 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0416 17:08:46.701186   38726 command_runner.go:130] > # separated by comma.
	I0416 17:08:46.701198   38726 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 17:08:46.701208   38726 command_runner.go:130] > # gid_mappings = ""
	I0416 17:08:46.701219   38726 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0416 17:08:46.701232   38726 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0416 17:08:46.701248   38726 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0416 17:08:46.701264   38726 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 17:08:46.701273   38726 command_runner.go:130] > # minimum_mappable_uid = -1
	I0416 17:08:46.701282   38726 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0416 17:08:46.701295   38726 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0416 17:08:46.701306   38726 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0416 17:08:46.701320   38726 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 17:08:46.701329   38726 command_runner.go:130] > # minimum_mappable_gid = -1
	I0416 17:08:46.701338   38726 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0416 17:08:46.701352   38726 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0416 17:08:46.701364   38726 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0416 17:08:46.701373   38726 command_runner.go:130] > # ctr_stop_timeout = 30
	I0416 17:08:46.701385   38726 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0416 17:08:46.701393   38726 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0416 17:08:46.701400   38726 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0416 17:08:46.701409   38726 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0416 17:08:46.701419   38726 command_runner.go:130] > drop_infra_ctr = false
	I0416 17:08:46.701433   38726 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0416 17:08:46.701445   38726 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0416 17:08:46.701459   38726 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0416 17:08:46.701469   38726 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0416 17:08:46.701478   38726 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0416 17:08:46.701487   38726 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0416 17:08:46.701497   38726 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0416 17:08:46.701509   38726 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0416 17:08:46.701519   38726 command_runner.go:130] > # shared_cpuset = ""
	I0416 17:08:46.701529   38726 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0416 17:08:46.701540   38726 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0416 17:08:46.701550   38726 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0416 17:08:46.701564   38726 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0416 17:08:46.701572   38726 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0416 17:08:46.701582   38726 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0416 17:08:46.701595   38726 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0416 17:08:46.701605   38726 command_runner.go:130] > # enable_criu_support = false
	I0416 17:08:46.701613   38726 command_runner.go:130] > # Enable/disable the generation of the container,
	I0416 17:08:46.701629   38726 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0416 17:08:46.701638   38726 command_runner.go:130] > # enable_pod_events = false
	I0416 17:08:46.701648   38726 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0416 17:08:46.701657   38726 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0416 17:08:46.701663   38726 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0416 17:08:46.701673   38726 command_runner.go:130] > # default_runtime = "runc"
	I0416 17:08:46.701682   38726 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0416 17:08:46.701695   38726 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0416 17:08:46.701712   38726 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0416 17:08:46.701723   38726 command_runner.go:130] > # creation as a file is not desired either.
	I0416 17:08:46.701738   38726 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0416 17:08:46.701746   38726 command_runner.go:130] > # the hostname is being managed dynamically.
	I0416 17:08:46.701753   38726 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0416 17:08:46.701762   38726 command_runner.go:130] > # ]
	I0416 17:08:46.701772   38726 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0416 17:08:46.701786   38726 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0416 17:08:46.701798   38726 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0416 17:08:46.701809   38726 command_runner.go:130] > # Each entry in the table should follow the format:
	I0416 17:08:46.701817   38726 command_runner.go:130] > #
	I0416 17:08:46.701825   38726 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0416 17:08:46.701833   38726 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0416 17:08:46.701882   38726 command_runner.go:130] > # runtime_type = "oci"
	I0416 17:08:46.701898   38726 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0416 17:08:46.701906   38726 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0416 17:08:46.701913   38726 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0416 17:08:46.701921   38726 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0416 17:08:46.701926   38726 command_runner.go:130] > # monitor_env = []
	I0416 17:08:46.701937   38726 command_runner.go:130] > # privileged_without_host_devices = false
	I0416 17:08:46.701946   38726 command_runner.go:130] > # allowed_annotations = []
	I0416 17:08:46.701959   38726 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0416 17:08:46.701968   38726 command_runner.go:130] > # Where:
	I0416 17:08:46.701979   38726 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0416 17:08:46.701992   38726 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0416 17:08:46.702002   38726 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0416 17:08:46.702011   38726 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0416 17:08:46.702019   38726 command_runner.go:130] > #   in $PATH.
	I0416 17:08:46.702033   38726 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0416 17:08:46.702045   38726 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0416 17:08:46.702062   38726 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0416 17:08:46.702070   38726 command_runner.go:130] > #   state.
	I0416 17:08:46.702080   38726 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0416 17:08:46.702090   38726 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0416 17:08:46.702102   38726 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0416 17:08:46.702114   38726 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0416 17:08:46.702133   38726 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0416 17:08:46.702146   38726 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0416 17:08:46.702160   38726 command_runner.go:130] > #   The currently recognized values are:
	I0416 17:08:46.702173   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0416 17:08:46.702183   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0416 17:08:46.702196   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0416 17:08:46.702209   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0416 17:08:46.702224   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0416 17:08:46.702237   38726 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0416 17:08:46.702250   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0416 17:08:46.702261   38726 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0416 17:08:46.702271   38726 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0416 17:08:46.702284   38726 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0416 17:08:46.702295   38726 command_runner.go:130] > #   deprecated option "conmon".
	I0416 17:08:46.702308   38726 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0416 17:08:46.702319   38726 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0416 17:08:46.702334   38726 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0416 17:08:46.702343   38726 command_runner.go:130] > #   should be moved to the container's cgroup
	I0416 17:08:46.702352   38726 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0416 17:08:46.702363   38726 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0416 17:08:46.702377   38726 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0416 17:08:46.702388   38726 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0416 17:08:46.702396   38726 command_runner.go:130] > #
	I0416 17:08:46.702407   38726 command_runner.go:130] > # Using the seccomp notifier feature:
	I0416 17:08:46.702416   38726 command_runner.go:130] > #
	I0416 17:08:46.702426   38726 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0416 17:08:46.702436   38726 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0416 17:08:46.702443   38726 command_runner.go:130] > #
	I0416 17:08:46.702455   38726 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0416 17:08:46.702469   38726 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0416 17:08:46.702477   38726 command_runner.go:130] > #
	I0416 17:08:46.702491   38726 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0416 17:08:46.702500   38726 command_runner.go:130] > # feature.
	I0416 17:08:46.702504   38726 command_runner.go:130] > #
	I0416 17:08:46.702516   38726 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0416 17:08:46.702524   38726 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0416 17:08:46.702536   38726 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0416 17:08:46.702549   38726 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0416 17:08:46.702562   38726 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0416 17:08:46.702571   38726 command_runner.go:130] > #
	I0416 17:08:46.702581   38726 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0416 17:08:46.702593   38726 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0416 17:08:46.702598   38726 command_runner.go:130] > #
	I0416 17:08:46.702607   38726 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0416 17:08:46.702613   38726 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0416 17:08:46.702621   38726 command_runner.go:130] > #
	I0416 17:08:46.702639   38726 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0416 17:08:46.702652   38726 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0416 17:08:46.702660   38726 command_runner.go:130] > # limitation.
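For illustration, a minimal pod sketch that exercises the seccomp notifier described above, assuming the runtime handler in use lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations; the pod name, container name, image and command are placeholders:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo                           # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"   # stop the workload after a blocked syscall
	spec:
	  restartPolicy: Never                                  # required; otherwise the kubelet restarts the container immediately
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox                  # placeholder image
	    command: ["sleep", "3600"]
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault                            # CRI-O modifies the chosen seccomp profile for the notifier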
	I0416 17:08:46.702670   38726 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0416 17:08:46.702680   38726 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0416 17:08:46.702687   38726 command_runner.go:130] > runtime_type = "oci"
	I0416 17:08:46.702696   38726 command_runner.go:130] > runtime_root = "/run/runc"
	I0416 17:08:46.702704   38726 command_runner.go:130] > runtime_config_path = ""
	I0416 17:08:46.702715   38726 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0416 17:08:46.702726   38726 command_runner.go:130] > monitor_cgroup = "pod"
	I0416 17:08:46.702736   38726 command_runner.go:130] > monitor_exec_cgroup = ""
	I0416 17:08:46.702745   38726 command_runner.go:130] > monitor_env = [
	I0416 17:08:46.702756   38726 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0416 17:08:46.702764   38726 command_runner.go:130] > ]
	I0416 17:08:46.702772   38726 command_runner.go:130] > privileged_without_host_devices = false
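Each [crio.runtime.runtimes.<name>] table such as the runc entry above defines a CRI runtime handler; as an illustrative sketch (the object name is a placeholder), a Kubernetes RuntimeClass can expose that handler to pods:

	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: runc-handler          # hypothetical name
	handler: runc                 # must match the [crio.runtime.runtimes.runc] table name above

A pod would then select it via spec.runtimeClassName: runc-handler.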
	I0416 17:08:46.702782   38726 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0416 17:08:46.702792   38726 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0416 17:08:46.702805   38726 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0416 17:08:46.702820   38726 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0416 17:08:46.702835   38726 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0416 17:08:46.702847   38726 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0416 17:08:46.702865   38726 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0416 17:08:46.702876   38726 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0416 17:08:46.702885   38726 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0416 17:08:46.702891   38726 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0416 17:08:46.702899   38726 command_runner.go:130] > # Example:
	I0416 17:08:46.702907   38726 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0416 17:08:46.702920   38726 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0416 17:08:46.702930   38726 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0416 17:08:46.702938   38726 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0416 17:08:46.702947   38726 command_runner.go:130] > # cpuset = 0
	I0416 17:08:46.702955   38726 command_runner.go:130] > # cpushares = "0-1"
	I0416 17:08:46.702961   38726 command_runner.go:130] > # Where:
	I0416 17:08:46.702971   38726 command_runner.go:130] > # The workload name is workload-type.
	I0416 17:08:46.702981   38726 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0416 17:08:46.702988   38726 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0416 17:08:46.702995   38726 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0416 17:08:46.703003   38726 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0416 17:08:46.703011   38726 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
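Tying the workload example above together, a pod that opts into the "workload-type" workload and overrides cpushares for one container would carry annotations of the documented form; the names and the "512" value here are placeholders:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                                   # hypothetical name
	  annotations:
	    io.crio/workload: ""                                # activation annotation; key only, value is ignored
	    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override for container "app"
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox                  # placeholder image
	    command: ["sleep", "3600"]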
	I0416 17:08:46.703018   38726 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0416 17:08:46.703025   38726 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0416 17:08:46.703032   38726 command_runner.go:130] > # Default value is set to true
	I0416 17:08:46.703040   38726 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0416 17:08:46.703052   38726 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0416 17:08:46.703063   38726 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0416 17:08:46.703074   38726 command_runner.go:130] > # Default value is set to 'false'
	I0416 17:08:46.703084   38726 command_runner.go:130] > # disable_hostport_mapping = false
	I0416 17:08:46.703097   38726 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0416 17:08:46.703105   38726 command_runner.go:130] > #
	I0416 17:08:46.703116   38726 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0416 17:08:46.703129   38726 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0416 17:08:46.703135   38726 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0416 17:08:46.703141   38726 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0416 17:08:46.703146   38726 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0416 17:08:46.703149   38726 command_runner.go:130] > [crio.image]
	I0416 17:08:46.703155   38726 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0416 17:08:46.703159   38726 command_runner.go:130] > # default_transport = "docker://"
	I0416 17:08:46.703167   38726 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0416 17:08:46.703172   38726 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0416 17:08:46.703175   38726 command_runner.go:130] > # global_auth_file = ""
	I0416 17:08:46.703180   38726 command_runner.go:130] > # The image used to instantiate infra containers.
	I0416 17:08:46.703184   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.703189   38726 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0416 17:08:46.703198   38726 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0416 17:08:46.703207   38726 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0416 17:08:46.703214   38726 command_runner.go:130] > # This option supports live configuration reload.
	I0416 17:08:46.703221   38726 command_runner.go:130] > # pause_image_auth_file = ""
	I0416 17:08:46.703231   38726 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0416 17:08:46.703240   38726 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0416 17:08:46.703249   38726 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0416 17:08:46.703259   38726 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0416 17:08:46.703266   38726 command_runner.go:130] > # pause_command = "/pause"
	I0416 17:08:46.703274   38726 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0416 17:08:46.703283   38726 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0416 17:08:46.703292   38726 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0416 17:08:46.703301   38726 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0416 17:08:46.703309   38726 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0416 17:08:46.703318   38726 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0416 17:08:46.703330   38726 command_runner.go:130] > # pinned_images = [
	I0416 17:08:46.703336   38726 command_runner.go:130] > # ]
	I0416 17:08:46.703342   38726 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0416 17:08:46.703348   38726 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0416 17:08:46.703356   38726 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0416 17:08:46.703367   38726 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0416 17:08:46.703375   38726 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0416 17:08:46.703382   38726 command_runner.go:130] > # signature_policy = ""
	I0416 17:08:46.703387   38726 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0416 17:08:46.703395   38726 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0416 17:08:46.703403   38726 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0416 17:08:46.703411   38726 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0416 17:08:46.703420   38726 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0416 17:08:46.703425   38726 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0416 17:08:46.703436   38726 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0416 17:08:46.703444   38726 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0416 17:08:46.703451   38726 command_runner.go:130] > # changing them here.
	I0416 17:08:46.703454   38726 command_runner.go:130] > # insecure_registries = [
	I0416 17:08:46.703458   38726 command_runner.go:130] > # ]
	I0416 17:08:46.703466   38726 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0416 17:08:46.703474   38726 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0416 17:08:46.703478   38726 command_runner.go:130] > # image_volumes = "mkdir"
	I0416 17:08:46.703487   38726 command_runner.go:130] > # Temporary directory to use for storing big files
	I0416 17:08:46.703493   38726 command_runner.go:130] > # big_files_temporary_dir = ""
	I0416 17:08:46.703499   38726 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0416 17:08:46.703505   38726 command_runner.go:130] > # CNI plugins.
	I0416 17:08:46.703508   38726 command_runner.go:130] > [crio.network]
	I0416 17:08:46.703516   38726 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0416 17:08:46.703524   38726 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0416 17:08:46.703532   38726 command_runner.go:130] > # cni_default_network = ""
	I0416 17:08:46.703537   38726 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0416 17:08:46.703543   38726 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0416 17:08:46.703549   38726 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0416 17:08:46.703555   38726 command_runner.go:130] > # plugin_dirs = [
	I0416 17:08:46.703559   38726 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0416 17:08:46.703564   38726 command_runner.go:130] > # ]
	I0416 17:08:46.703571   38726 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0416 17:08:46.703577   38726 command_runner.go:130] > [crio.metrics]
	I0416 17:08:46.703583   38726 command_runner.go:130] > # Globally enable or disable metrics support.
	I0416 17:08:46.703590   38726 command_runner.go:130] > enable_metrics = true
	I0416 17:08:46.703594   38726 command_runner.go:130] > # Specify enabled metrics collectors.
	I0416 17:08:46.703601   38726 command_runner.go:130] > # Per default all metrics are enabled.
	I0416 17:08:46.703609   38726 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0416 17:08:46.703617   38726 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0416 17:08:46.703625   38726 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0416 17:08:46.703629   38726 command_runner.go:130] > # metrics_collectors = [
	I0416 17:08:46.703635   38726 command_runner.go:130] > # 	"operations",
	I0416 17:08:46.703640   38726 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0416 17:08:46.703646   38726 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0416 17:08:46.703650   38726 command_runner.go:130] > # 	"operations_errors",
	I0416 17:08:46.703655   38726 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0416 17:08:46.703659   38726 command_runner.go:130] > # 	"image_pulls_by_name",
	I0416 17:08:46.703665   38726 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0416 17:08:46.703670   38726 command_runner.go:130] > # 	"image_pulls_failures",
	I0416 17:08:46.703676   38726 command_runner.go:130] > # 	"image_pulls_successes",
	I0416 17:08:46.703680   38726 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0416 17:08:46.703684   38726 command_runner.go:130] > # 	"image_layer_reuse",
	I0416 17:08:46.703689   38726 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0416 17:08:46.703697   38726 command_runner.go:130] > # 	"containers_oom_total",
	I0416 17:08:46.703703   38726 command_runner.go:130] > # 	"containers_oom",
	I0416 17:08:46.703707   38726 command_runner.go:130] > # 	"processes_defunct",
	I0416 17:08:46.703713   38726 command_runner.go:130] > # 	"operations_total",
	I0416 17:08:46.703717   38726 command_runner.go:130] > # 	"operations_latency_seconds",
	I0416 17:08:46.703724   38726 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0416 17:08:46.703728   38726 command_runner.go:130] > # 	"operations_errors_total",
	I0416 17:08:46.703734   38726 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0416 17:08:46.703740   38726 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0416 17:08:46.703747   38726 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0416 17:08:46.703751   38726 command_runner.go:130] > # 	"image_pulls_success_total",
	I0416 17:08:46.703758   38726 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0416 17:08:46.703762   38726 command_runner.go:130] > # 	"containers_oom_count_total",
	I0416 17:08:46.703768   38726 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0416 17:08:46.703773   38726 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0416 17:08:46.703779   38726 command_runner.go:130] > # ]
	I0416 17:08:46.703784   38726 command_runner.go:130] > # The port on which the metrics server will listen.
	I0416 17:08:46.703791   38726 command_runner.go:130] > # metrics_port = 9090
	I0416 17:08:46.703796   38726 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0416 17:08:46.703802   38726 command_runner.go:130] > # metrics_socket = ""
	I0416 17:08:46.703807   38726 command_runner.go:130] > # The certificate for the secure metrics server.
	I0416 17:08:46.703815   38726 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0416 17:08:46.703821   38726 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0416 17:08:46.703839   38726 command_runner.go:130] > # certificate on any modification event.
	I0416 17:08:46.703844   38726 command_runner.go:130] > # metrics_cert = ""
	I0416 17:08:46.703848   38726 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0416 17:08:46.703854   38726 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0416 17:08:46.703858   38726 command_runner.go:130] > # metrics_key = ""
	I0416 17:08:46.703865   38726 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0416 17:08:46.703869   38726 command_runner.go:130] > [crio.tracing]
	I0416 17:08:46.703875   38726 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0416 17:08:46.703879   38726 command_runner.go:130] > # enable_tracing = false
	I0416 17:08:46.703887   38726 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0416 17:08:46.703891   38726 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0416 17:08:46.703897   38726 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0416 17:08:46.703904   38726 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0416 17:08:46.703908   38726 command_runner.go:130] > # CRI-O NRI configuration.
	I0416 17:08:46.703913   38726 command_runner.go:130] > [crio.nri]
	I0416 17:08:46.703917   38726 command_runner.go:130] > # Globally enable or disable NRI.
	I0416 17:08:46.703921   38726 command_runner.go:130] > # enable_nri = false
	I0416 17:08:46.703925   38726 command_runner.go:130] > # NRI socket to listen on.
	I0416 17:08:46.703929   38726 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0416 17:08:46.703935   38726 command_runner.go:130] > # NRI plugin directory to use.
	I0416 17:08:46.703940   38726 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0416 17:08:46.703947   38726 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0416 17:08:46.703954   38726 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0416 17:08:46.703959   38726 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0416 17:08:46.703965   38726 command_runner.go:130] > # nri_disable_connections = false
	I0416 17:08:46.703970   38726 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0416 17:08:46.703978   38726 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0416 17:08:46.703984   38726 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0416 17:08:46.703990   38726 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0416 17:08:46.703996   38726 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0416 17:08:46.704002   38726 command_runner.go:130] > [crio.stats]
	I0416 17:08:46.704007   38726 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0416 17:08:46.704014   38726 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0416 17:08:46.704022   38726 command_runner.go:130] > # stats_collection_period = 0
	I0416 17:08:46.704046   38726 command_runner.go:130] ! time="2024-04-16 17:08:46.673009078Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0416 17:08:46.704064   38726 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0416 17:08:46.704203   38726 cni.go:84] Creating CNI manager for ""
	I0416 17:08:46.704217   38726 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 17:08:46.704225   38726 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:08:46.704248   38726 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-334221 NodeName:multinode-334221 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:08:46.704364   38726 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-334221"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:08:46.704419   38726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:08:46.715323   38726 command_runner.go:130] > kubeadm
	I0416 17:08:46.715343   38726 command_runner.go:130] > kubectl
	I0416 17:08:46.715346   38726 command_runner.go:130] > kubelet
	I0416 17:08:46.715366   38726 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:08:46.715405   38726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:08:46.725541   38726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 17:08:46.744216   38726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:08:46.762699   38726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0416 17:08:46.781008   38726 ssh_runner.go:195] Run: grep 192.168.39.137	control-plane.minikube.internal$ /etc/hosts
	I0416 17:08:46.785249   38726 command_runner.go:130] > 192.168.39.137	control-plane.minikube.internal
	I0416 17:08:46.785432   38726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:08:46.923127   38726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:08:46.938790   38726 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221 for IP: 192.168.39.137
	I0416 17:08:46.938813   38726 certs.go:194] generating shared ca certs ...
	I0416 17:08:46.938829   38726 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:08:46.938960   38726 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:08:46.939041   38726 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:08:46.939053   38726 certs.go:256] generating profile certs ...
	I0416 17:08:46.939144   38726 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/client.key
	I0416 17:08:46.939212   38726 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.key.2ea9189c
	I0416 17:08:46.939251   38726 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.key
	I0416 17:08:46.939262   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 17:08:46.939282   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 17:08:46.939300   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 17:08:46.939316   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 17:08:46.939332   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 17:08:46.939350   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 17:08:46.939363   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 17:08:46.939381   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 17:08:46.939446   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:08:46.939487   38726 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:08:46.939501   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:08:46.939532   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:08:46.939560   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:08:46.939595   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:08:46.939646   38726 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:08:46.939690   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:46.939708   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem -> /usr/share/ca-certificates/10910.pem
	I0416 17:08:46.939723   38726 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> /usr/share/ca-certificates/109102.pem
	I0416 17:08:46.940568   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:08:46.969524   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:08:46.996252   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:08:47.024358   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:08:47.051335   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 17:08:47.079710   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:08:47.107996   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:08:47.138956   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/multinode-334221/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 17:08:47.166837   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:08:47.193402   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:08:47.220791   38726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:08:47.247473   38726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:08:47.266129   38726 ssh_runner.go:195] Run: openssl version
	I0416 17:08:47.272672   38726 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 17:08:47.272754   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:08:47.284439   38726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:47.289392   38726 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:47.289593   38726 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:47.289642   38726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:08:47.295487   38726 command_runner.go:130] > b5213941
	I0416 17:08:47.295721   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:08:47.305631   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:08:47.317239   38726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:08:47.322122   38726 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:08:47.322228   38726 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:08:47.322265   38726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:08:47.328413   38726 command_runner.go:130] > 51391683
	I0416 17:08:47.328462   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:08:47.338303   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:08:47.349721   38726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:08:47.354883   38726 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:08:47.354908   38726 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:08:47.354944   38726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:08:47.361146   38726 command_runner.go:130] > 3ec20f2e
	I0416 17:08:47.361192   38726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:08:47.371024   38726 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:08:47.375921   38726 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:08:47.375948   38726 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0416 17:08:47.375956   38726 command_runner.go:130] > Device: 253,1	Inode: 9433606     Links: 1
	I0416 17:08:47.375966   38726 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 17:08:47.375982   38726 command_runner.go:130] > Access: 2024-04-16 17:02:34.010891714 +0000
	I0416 17:08:47.375991   38726 command_runner.go:130] > Modify: 2024-04-16 17:02:34.010891714 +0000
	I0416 17:08:47.375998   38726 command_runner.go:130] > Change: 2024-04-16 17:02:34.010891714 +0000
	I0416 17:08:47.376006   38726 command_runner.go:130] >  Birth: 2024-04-16 17:02:34.010891714 +0000
	I0416 17:08:47.376053   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:08:47.381971   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.382173   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:08:47.387951   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.388220   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:08:47.393801   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.394101   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:08:47.400147   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.400201   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:08:47.405949   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.406222   38726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 17:08:47.412737   38726 command_runner.go:130] > Certificate will not expire
	I0416 17:08:47.412802   38726 kubeadm.go:391] StartCluster: {Name:multinode-334221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-334221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.95 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:08:47.412963   38726 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:08:47.413031   38726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:08:47.457672   38726 command_runner.go:130] > 90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1
	I0416 17:08:47.457708   38726 command_runner.go:130] > ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059
	I0416 17:08:47.457714   38726 command_runner.go:130] > 8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85
	I0416 17:08:47.457720   38726 command_runner.go:130] > 1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c
	I0416 17:08:47.457726   38726 command_runner.go:130] > 2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a
	I0416 17:08:47.457732   38726 command_runner.go:130] > 842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2
	I0416 17:08:47.457737   38726 command_runner.go:130] > 37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93
	I0416 17:08:47.457762   38726 command_runner.go:130] > dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c
	I0416 17:08:47.457785   38726 cri.go:89] found id: "90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1"
	I0416 17:08:47.457794   38726 cri.go:89] found id: "ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059"
	I0416 17:08:47.457797   38726 cri.go:89] found id: "8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85"
	I0416 17:08:47.457800   38726 cri.go:89] found id: "1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c"
	I0416 17:08:47.457802   38726 cri.go:89] found id: "2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a"
	I0416 17:08:47.457805   38726 cri.go:89] found id: "842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2"
	I0416 17:08:47.457808   38726 cri.go:89] found id: "37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93"
	I0416 17:08:47.457813   38726 cri.go:89] found id: "dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c"
	I0416 17:08:47.457816   38726 cri.go:89] found id: ""
	I0416 17:08:47.457852   38726 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.711737489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713287559711714339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3b1fb5c-42dc-4b6a-88e2-66d01dfd39c7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.713351997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cc4797b-2232-430b-9cb9-4dc00f8f269f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.713724950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cc4797b-2232-430b-9cb9-4dc00f8f269f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.713392066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e48bfb63-ba7e-4cad-96ba-a2afe1d5dd2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.714474095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e48bfb63-ba7e-4cad-96ba-a2afe1d5dd2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.714522041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39bb74fcbecdeb2cd9c43ef1f41754ab21c3506e179e8cdd0f266653d9eeccc7,PodSandboxId:25edee42625e333cda08a390c25df64a49bcb34dae2df5570a3472bc0d201242,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713287368678061445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c,PodSandboxId:e728a4a327b666dc29fc5594bbd940db37ca0ee807385463632f441f1644812c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713287335195483143,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1,PodSandboxId:02cb3b0ad34d5948c309b23d1568320f0b0a840ac0bec7e1659783c09fe1a11a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713287335154843422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008,PodSandboxId:8da2712a9cc645c05d13c0b01b6105b5019efd9ccd6e2397b4df4f8c4c724eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713287335009106481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6-780165e0a570,},Annotations:map[string]
string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b39d13c5d882d074f0e21027230fa622acae265826ae92f3cfedbdeddba0a9,PodSandboxId:25f600fe2fc457c62ebac85541058159428263f98e8664e1337e781b7938b8e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713287334926496273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5,PodSandboxId:ca6aae62358e1fbf35d548b4673c38e2c64a5beea08f2055788bb10730f29d45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713287330116705571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc,PodSandboxId:be38d4a6f7e96e5bca49fe4d4c6624519c46762b303626f52631021f70715131,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713287330176032126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8,PodSandboxId:705fc0c156c3d25216b3b420ed3731560deeeb7e4f1c9c4e11e2000818d86d9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713287330025886701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string{io.kubernetes.container.hash: 5dde1468,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b,PodSandboxId:3dc43f19b5247d0f42c95d4caece46d83f37680361cb0f366594ae7a9799929f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713287330040042723,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.container.hash: 683921d8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff6e20af81510222826d1f3ec91344fa5bf553f74f3af5217b80c032e66de9a,PodSandboxId:69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713287025336379052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1,PodSandboxId:792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713286979309245057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.kubernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059,PodSandboxId:e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713286979289160574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85,PodSandboxId:0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713286977332855192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c,PodSandboxId:1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713286977189160958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6
-780165e0a570,},Annotations:map[string]string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a,PodSandboxId:a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713286957910413253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string
{io.kubernetes.container.hash: 5dde1468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2,PodSandboxId:03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713286957883794942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.
container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c,PodSandboxId:258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713286957764848080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93,PodSandboxId:1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713286957790244089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 683921d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1cc4797b-2232-430b-9cb9-4dc00f8f269f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.715090142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39bb74fcbecdeb2cd9c43ef1f41754ab21c3506e179e8cdd0f266653d9eeccc7,PodSandboxId:25edee42625e333cda08a390c25df64a49bcb34dae2df5570a3472bc0d201242,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713287368678061445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c,PodSandboxId:e728a4a327b666dc29fc5594bbd940db37ca0ee807385463632f441f1644812c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713287335195483143,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1,PodSandboxId:02cb3b0ad34d5948c309b23d1568320f0b0a840ac0bec7e1659783c09fe1a11a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713287335154843422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008,PodSandboxId:8da2712a9cc645c05d13c0b01b6105b5019efd9ccd6e2397b4df4f8c4c724eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713287335009106481,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6-780165e0a570,},Annotations:map[string]
string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b39d13c5d882d074f0e21027230fa622acae265826ae92f3cfedbdeddba0a9,PodSandboxId:25f600fe2fc457c62ebac85541058159428263f98e8664e1337e781b7938b8e4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713287334926496273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5,PodSandboxId:ca6aae62358e1fbf35d548b4673c38e2c64a5beea08f2055788bb10730f29d45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713287330116705571,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc,PodSandboxId:be38d4a6f7e96e5bca49fe4d4c6624519c46762b303626f52631021f70715131,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713287330176032126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io.kub
ernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8,PodSandboxId:705fc0c156c3d25216b3b420ed3731560deeeb7e4f1c9c4e11e2000818d86d9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713287330025886701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string{io.kubernetes.container.hash: 5dde1468,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b,PodSandboxId:3dc43f19b5247d0f42c95d4caece46d83f37680361cb0f366594ae7a9799929f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713287330040042723,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.container.hash: 683921d8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ff6e20af81510222826d1f3ec91344fa5bf553f74f3af5217b80c032e66de9a,PodSandboxId:69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713287025336379052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90dcd274439a03f040031d668ac4d6a0d2437ffe879fb2c91738e88bfa0397a1,PodSandboxId:792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713286979309245057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.kubernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059,PodSandboxId:e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713286979289160574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85,PodSandboxId:0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713286977332855192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c,PodSandboxId:1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713286977189160958,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6
-780165e0a570,},Annotations:map[string]string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a,PodSandboxId:a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713286957910413253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string
{io.kubernetes.container.hash: 5dde1468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2,PodSandboxId:03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713286957883794942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.
container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c,PodSandboxId:258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713286957764848080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93,PodSandboxId:1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713286957790244089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 683921d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e48bfb63-ba7e-4cad-96ba-a2afe1d5dd2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.717401709Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:39bb74fcbecdeb2cd9c43ef1f41754ab21c3506e179e8cdd0f266653d9eeccc7,Verbose:false,}" file="otel-collector/interceptors.go:62" id=393d95dc-c8c0-4260-86e7-345092c5f2a3 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.717524309Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:39bb74fcbecdeb2cd9c43ef1f41754ab21c3506e179e8cdd0f266653d9eeccc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287368734357470,StartedAt:1713287368769528653,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-fn86w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bec786d6-f06c-401d-af63-69faa1ffcd84,},Annotations:map[string]string{io.kubernetes.container.hash: b241ce5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/bec786d6-f06c-401d-af63-69faa1ffcd84/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/bec786d6-f06c-401d-af63-69faa1ffcd84/containers/busybox/eaf1f632,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/bec786d6-f06c-401d-af63-69faa1ffcd84/volumes/kubernetes.io~projected/kube-api-access-c29ds,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox-7fdf7869d9-fn86w_bec786d6-f06c-401d-af63-69faa1ffcd84/busybox/1.log,Resources:&C
ontainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=393d95dc-c8c0-4260-86e7-345092c5f2a3 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.718373639Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=67145657-cf56-4d5c-a04e-f90e582827be name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.718571948Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287335474839633,StartedAt:1713287335613408122,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240202-8f1494ea,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fntnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1,},Annotations:map[string]string{io.kubernetes.container.hash: c46fcdca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1/containers/kindnet-cni/709c5296,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath
:/etc/cni/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1/volumes/kubernetes.io~projected/kube-api-access-tbjr7,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-fntnd_8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1/kindnet-cni/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=67145657-cf56-4d5c-a04e-f90e58
2827be name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.719391626Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8e383e27-75e8-4f62-87cb-df32f4fc9a1e name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.720273395Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287335282704163,StartedAt:1713287335363072378,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kmmn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de04df6b-6ad2-4417-94fd-1d8bb97b864a,},Annotations:map[string]string{io.kubernetes.container.hash: 10a68e65,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"co
ntainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/de04df6b-6ad2-4417-94fd-1d8bb97b864a/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/de04df6b-6ad2-4417-94fd-1d8bb97b864a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/de04df6b-6ad2-4417-94fd-1d8bb97b864a/containers/coredns/36612de4,Readonly:false,SelinuxRelabel:false,Propagation
:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/de04df6b-6ad2-4417-94fd-1d8bb97b864a/volumes/kubernetes.io~projected/kube-api-access-mx7ds,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-76f75df574-kmmn4_de04df6b-6ad2-4417-94fd-1d8bb97b864a/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8e383e27-75e8-4f62-87cb-df32f4fc9a1e name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.721341233Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b6004743-41e5-41d4-a655-aba086533bc7 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.721437113Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287335093220603,StartedAt:1713287335164174557,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jjc8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe0e05-fb6a-4fe3-8eb6-780165e0a570,},Annotations:map[string]string{io.kubernetes.container.hash: 945b1316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/90fe0e05-fb6a-4fe3-8eb6-780165e0a570/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/90fe0e05-fb6a-4fe3-8eb6-780165e0a570/containers/kube-proxy/73fccb25,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/
lib/kubelet/pods/90fe0e05-fb6a-4fe3-8eb6-780165e0a570/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/90fe0e05-fb6a-4fe3-8eb6-780165e0a570/volumes/kubernetes.io~projected/kube-api-access-hblh5,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-jjc8v_90fe0e05-fb6a-4fe3-8eb6-780165e0a570/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-c
ollector/interceptors.go:74" id=b6004743-41e5-41d4-a655-aba086533bc7 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.722075043Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:33b39d13c5d882d074f0e21027230fa622acae265826ae92f3cfedbdeddba0a9,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b8ee8cf6-fc56-4448-87c2-9d85ab80217e name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.722172026Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:33b39d13c5d882d074f0e21027230fa622acae265826ae92f3cfedbdeddba0a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287335040509213,StartedAt:1713287335151919944,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dd215e8-2408-4dd5-971e-984ba5364a2b,},Annotations:map[string]string{io.kubernetes.container.hash: cd2c2013,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5dd215e8-2408-4dd5-971e-984ba5364a2b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5dd215e8-2408-4dd5-971e-984ba5364a2b/containers/storage-provisioner/f642fe5f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5dd215e8-2408-4dd5-971e-984ba5364a2b/volumes/kubernetes.io~projected/kube-api-access-wrt48,Readonly:true,SelinuxRelabel:fals
e,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_5dd215e8-2408-4dd5-971e-984ba5364a2b/storage-provisioner/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b8ee8cf6-fc56-4448-87c2-9d85ab80217e name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.722665977Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5,Verbose:false,}" file="otel-collector/interceptors.go:62" id=05d11486-be8d-41fb-aa9f-8cf0a051a7a4 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.722751768Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287330272388790,StartedAt:1713287330348485761,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 120c3e394989b4d3ebee3b461ba74f97,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/120c3e394989b4d3ebee3b461ba74f97/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/120c3e394989b4d3ebee3b461ba74f97/containers/kube-scheduler/17239b7a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-multinode-334221_120c3e394989b4d3ebee3b461ba74f97/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeri
od:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=05d11486-be8d-41fb-aa9f-8cf0a051a7a4 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.723420619Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f8a6f84a-c3e1-45b2-866c-9035dab7e7fe name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.723521434Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287330258023046,StartedAt:1713287330442221130,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d052fc5203f79937ba06a7a4a172dee,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6d052fc5203f79937ba06a7a4a172dee/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6d052fc5203f79937ba06a7a4a172dee/containers/kube-controller-manager/0273f5ae,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,
UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-multinode-334221_6d052fc5203f79937ba06a7a4a172dee/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMem
s:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f8a6f84a-c3e1-45b2-866c-9035dab7e7fe name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.724018338Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a4233a3a-eb76-46e6-a7c0-3def0c2dec8e name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.724114550Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287330141347690,StartedAt:1713287330290707429,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7a5a5dc6e39c6c525ff7d9719f9ca00,},Annotations:map[string]string{io.kubernetes.container.hash: 5dde1468,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a7a5a5dc6e39c6c525ff7d9719f9ca00/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a7a5a5dc6e39c6c525ff7d9719f9ca00/containers/etcd/921e589e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-m
ultinode-334221_a7a5a5dc6e39c6c525ff7d9719f9ca00/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a4233a3a-eb76-46e6-a7c0-3def0c2dec8e name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.724680576Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a37cdf42-86ef-4d90-8844-3fd4ef10555c name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:12:39 multinode-334221 crio[2849]: time="2024-04-16 17:12:39.724769889Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713287330129663465,StartedAt:1713287330259701239,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967de72eba21f1ee9f74d3a0d8fc1538,},Annotations:map[string]string{io.kubernetes.container.hash: 683921d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/967de72eba21f1ee9f74d3a0d8fc1538/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/967de72eba21f1ee9f74d3a0d8fc1538/containers/kube-apiserver/eefa7e9f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Containe
rPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-multinode-334221_967de72eba21f1ee9f74d3a0d8fc1538/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a37cdf42-86ef-4d90-8844-3fd4ef10555c name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39bb74fcbecde       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   25edee42625e3       busybox-7fdf7869d9-fn86w
	918a75a3ca792       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   e728a4a327b66       kindnet-fntnd
	0d96295ea9684       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   02cb3b0ad34d5       coredns-76f75df574-kmmn4
	ff8b0d5f33be5       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago       Running             kube-proxy                1                   8da2712a9cc64       kube-proxy-jjc8v
	33b39d13c5d88       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   25f600fe2fc45       storage-provisioner
	22266da17977f       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago       Running             kube-controller-manager   1                   be38d4a6f7e96       kube-controller-manager-multinode-334221
	677a87ab6b202       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago       Running             kube-scheduler            1                   ca6aae62358e1       kube-scheduler-multinode-334221
	47be8c7e2330a       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago       Running             kube-apiserver            1                   3dc43f19b5247       kube-apiserver-multinode-334221
	257ea8618977c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   705fc0c156c3d       etcd-multinode-334221
	6ff6e20af8151       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   69f4c1c5a6a7b       busybox-7fdf7869d9-fn86w
	90dcd274439a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   792dfcb8e32e6       storage-provisioner
	ec151ba6a42a2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   e795d10063d9e       coredns-76f75df574-kmmn4
	8d106f52934dc       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   0b7212d0b852b       kindnet-fntnd
	1ad0500c2ca8e       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      9 minutes ago       Exited              kube-proxy                0                   1daef1766a0ea       kube-proxy-jjc8v
	2a739b90a41d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   a403e706ad902       etcd-multinode-334221
	842b6569b6e08       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      10 minutes ago      Exited              kube-scheduler            0                   03f5937495793       kube-scheduler-multinode-334221
	37623592e737d       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      10 minutes ago      Exited              kube-apiserver            0                   1f07ad5930705       kube-apiserver-multinode-334221
	dffaed579f047       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      10 minutes ago      Exited              kube-controller-manager   0                   258d7e84b6f54       kube-controller-manager-multinode-334221
	
	
	==> coredns [0d96295ea9684c685fe501ff01b53465e3e6322018fc55fc26732983ead1faf1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48533 - 2317 "HINFO IN 8745856005267822946.1325250241756429142. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.084486017s
	
	
	==> coredns [ec151ba6a42a22383a5b3731a8a87ac77d0a7569f3edfc77d2d767bd83c01059] <==
	[INFO] 10.244.1.2:54242 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001686432s
	[INFO] 10.244.1.2:52356 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154252s
	[INFO] 10.244.1.2:53372 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135816s
	[INFO] 10.244.1.2:57180 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001264825s
	[INFO] 10.244.1.2:34404 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007883s
	[INFO] 10.244.1.2:35137 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158335s
	[INFO] 10.244.1.2:47370 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098421s
	[INFO] 10.244.0.3:37968 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156843s
	[INFO] 10.244.0.3:39972 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092463s
	[INFO] 10.244.0.3:40885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076822s
	[INFO] 10.244.0.3:35714 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071218s
	[INFO] 10.244.1.2:55015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187475s
	[INFO] 10.244.1.2:44135 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098668s
	[INFO] 10.244.1.2:44160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118175s
	[INFO] 10.244.1.2:53055 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152982s
	[INFO] 10.244.0.3:50792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109758s
	[INFO] 10.244.0.3:56375 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215576s
	[INFO] 10.244.0.3:53832 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079611s
	[INFO] 10.244.0.3:58674 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099832s
	[INFO] 10.244.1.2:42759 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172484s
	[INFO] 10.244.1.2:32992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000273696s
	[INFO] 10.244.1.2:41132 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113035s
	[INFO] 10.244.1.2:55606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115315s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-334221
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334221
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-334221
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_02_44_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:02:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334221
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:12:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:08:53 +0000   Tue, 16 Apr 2024 17:02:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:08:53 +0000   Tue, 16 Apr 2024 17:02:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:08:53 +0000   Tue, 16 Apr 2024 17:02:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:08:53 +0000   Tue, 16 Apr 2024 17:02:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    multinode-334221
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5c158eac4584c888e6ef2b0e52007a0
	  System UUID:                c5c158ea-c458-4c88-8e6e-f2b0e52007a0
	  Boot ID:                    55202679-9eef-45ab-97dd-0197453c8d95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-fn86w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 coredns-76f75df574-kmmn4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m44s
	  kube-system                 etcd-multinode-334221                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m56s
	  kube-system                 kindnet-fntnd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m44s
	  kube-system                 kube-apiserver-multinode-334221             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-controller-manager-multinode-334221    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-proxy-jjc8v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-scheduler-multinode-334221             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m42s                  kube-proxy       
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-334221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-334221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-334221 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m56s                  kubelet          Node multinode-334221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m56s                  kubelet          Node multinode-334221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     9m56s                  kubelet          Node multinode-334221 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m45s                  node-controller  Node multinode-334221 event: Registered Node multinode-334221 in Controller
	  Normal  NodeReady                9m42s                  kubelet          Node multinode-334221 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-334221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-334221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-334221 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m34s                  node-controller  Node multinode-334221 event: Registered Node multinode-334221 in Controller
	
	
	Name:               multinode-334221-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334221-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=multinode-334221
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T17_09_37_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:09:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334221-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:10:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 16 Apr 2024 17:10:07 +0000   Tue, 16 Apr 2024 17:11:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 16 Apr 2024 17:10:07 +0000   Tue, 16 Apr 2024 17:11:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 16 Apr 2024 17:10:07 +0000   Tue, 16 Apr 2024 17:11:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 16 Apr 2024 17:10:07 +0000   Tue, 16 Apr 2024 17:11:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-334221-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0675febc0c45430ea1c6abae45425fcc
	  System UUID:                0675febc-0c45-430e-a1c6-abae45425fcc
	  Boot ID:                    a243c7c7-9811-4f3c-bee7-4fcaacac818f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-d5wzc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-xfr28               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m7s
	  kube-system                 kube-proxy-24lft            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 9m1s                 kube-proxy       
	  Normal  Starting                 2m58s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m7s (x2 over 9m7s)  kubelet          Node multinode-334221-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m7s (x2 over 9m7s)  kubelet          Node multinode-334221-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m7s (x2 over 9m7s)  kubelet          Node multinode-334221-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                8m59s                kubelet          Node multinode-334221-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node multinode-334221-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node multinode-334221-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node multinode-334221-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node multinode-334221-m02 event: Registered Node multinode-334221-m02 in Controller
	  Normal  NodeReady                2m56s                kubelet          Node multinode-334221-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                  node-controller  Node multinode-334221-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.060370] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071560] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.176929] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140075] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.288128] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.976125] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.067093] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.357730] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.720073] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.587857] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.092944] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.702743] systemd-fstab-generator[1462]: Ignoring "noauto" option for root device
	[  +0.153033] kauditd_printk_skb: 21 callbacks suppressed
	[Apr16 17:03] kauditd_printk_skb: 82 callbacks suppressed
	[Apr16 17:08] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +0.169198] systemd-fstab-generator[2780]: Ignoring "noauto" option for root device
	[  +0.176428] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.158939] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.294153] systemd-fstab-generator[2835]: Ignoring "noauto" option for root device
	[  +0.741104] systemd-fstab-generator[2937]: Ignoring "noauto" option for root device
	[  +2.238997] systemd-fstab-generator[3064]: Ignoring "noauto" option for root device
	[  +5.731424] kauditd_printk_skb: 184 callbacks suppressed
	[Apr16 17:09] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.444050] systemd-fstab-generator[3882]: Ignoring "noauto" option for root device
	[ +17.875973] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [257ea8618977c8bb2744321c830e459a7f52c7d652e71b7f91f3af664a4d3cc8] <==
	{"level":"info","ts":"2024-04-16T17:08:50.572383Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5527995f6263874a","initial-advertise-peer-urls":["https://192.168.39.137:2380"],"listen-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:08:50.57244Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T17:08:50.572549Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-04-16T17:08:50.572581Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-04-16T17:08:50.573669Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:08:50.573744Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:08:50.573755Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:08:50.57393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a switched to configuration voters=(6136041652267222858)"}
	{"level":"info","ts":"2024-04-16T17:08:50.57414Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","added-peer-id":"5527995f6263874a","added-peer-peer-urls":["https://192.168.39.137:2380"]}
	{"level":"info","ts":"2024-04-16T17:08:50.574294Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8623b2a8b011233f","local-member-id":"5527995f6263874a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:08:50.574351Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:08:52.030733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T17:08:52.030806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:08:52.030848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 2"}
	{"level":"info","ts":"2024-04-16T17:08:52.030862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T17:08:52.030888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgVoteResp from 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-04-16T17:08:52.030897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became leader at term 3"}
	{"level":"info","ts":"2024-04-16T17:08:52.030908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5527995f6263874a elected leader 5527995f6263874a at term 3"}
	{"level":"info","ts":"2024-04-16T17:08:52.037189Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:08:52.038103Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5527995f6263874a","local-member-attributes":"{Name:multinode-334221 ClientURLs:[https://192.168.39.137:2379]}","request-path":"/0/members/5527995f6263874a/attributes","cluster-id":"8623b2a8b011233f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:08:52.038356Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:08:52.038653Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:08:52.038694Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:08:52.039293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.137:2379"}
	{"level":"info","ts":"2024-04-16T17:08:52.040475Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [2a739b90a41d947256033d3789fee6a5096ef8c58a880cbed9fbffd112a5ce2a] <==
	{"level":"info","ts":"2024-04-16T17:02:38.532033Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:02:38.532164Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:03:33.371202Z","caller":"traceutil/trace.go:171","msg":"trace[1176074096] linearizableReadLoop","detail":"{readStateIndex:492; appliedIndex:491; }","duration":"170.613839ms","start":"2024-04-16T17:03:33.200551Z","end":"2024-04-16T17:03:33.371165Z","steps":["trace[1176074096] 'read index received'  (duration: 166.533636ms)","trace[1176074096] 'applied index is now lower than readState.Index'  (duration: 4.079578ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:03:33.371916Z","caller":"traceutil/trace.go:171","msg":"trace[61202478] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"181.462307ms","start":"2024-04-16T17:03:33.190443Z","end":"2024-04-16T17:03:33.371905Z","steps":["trace[61202478] 'process raft request'  (duration: 176.737661ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:03:33.37212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.515336ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-334221-m02.17c6d176062cc839\" ","response":"range_response_count:1 size:741"}
	{"level":"info","ts":"2024-04-16T17:03:33.372883Z","caller":"traceutil/trace.go:171","msg":"trace[1179955265] range","detail":"{range_begin:/registry/events/default/multinode-334221-m02.17c6d176062cc839; range_end:; response_count:1; response_revision:472; }","duration":"172.306584ms","start":"2024-04-16T17:03:33.200527Z","end":"2024-04-16T17:03:33.372834Z","steps":["trace[1179955265] 'agreement among raft nodes before linearized reading'  (duration: 171.466761ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:03:35.885061Z","caller":"traceutil/trace.go:171","msg":"trace[1235357056] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:520; }","duration":"156.41988ms","start":"2024-04-16T17:03:35.728623Z","end":"2024-04-16T17:03:35.885043Z","steps":["trace[1235357056] 'read index received'  (duration: 142.917864ms)","trace[1235357056] 'applied index is now lower than readState.Index'  (duration: 13.501125ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:03:35.885217Z","caller":"traceutil/trace.go:171","msg":"trace[140332580] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"171.690877ms","start":"2024-04-16T17:03:35.713518Z","end":"2024-04-16T17:03:35.885209Z","steps":["trace[140332580] 'process raft request'  (duration: 158.070586ms)","trace[140332580] 'compare'  (duration: 13.2934ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:03:35.885352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.716267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-04-16T17:03:35.88541Z","caller":"traceutil/trace.go:171","msg":"trace[2098667624] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:500; }","duration":"156.802396ms","start":"2024-04-16T17:03:35.7286Z","end":"2024-04-16T17:03:35.885403Z","steps":["trace[2098667624] 'agreement among raft nodes before linearized reading'  (duration: 156.713584ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:03:35.885437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.319487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-334221-m02\" ","response":"range_response_count:1 size:2959"}
	{"level":"info","ts":"2024-04-16T17:03:35.885504Z","caller":"traceutil/trace.go:171","msg":"trace[1138375139] range","detail":"{range_begin:/registry/minions/multinode-334221-m02; range_end:; response_count:1; response_revision:500; }","duration":"139.408533ms","start":"2024-04-16T17:03:35.746086Z","end":"2024-04-16T17:03:35.885494Z","steps":["trace[1138375139] 'agreement among raft nodes before linearized reading'  (duration: 139.32103ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:04:18.898486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.070567ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9748761469881449988 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-334221-m03.17c6d1809f6d8af6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-334221-m03.17c6d1809f6d8af6\" value_size:642 lease:525389433026673887 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-16T17:04:18.898753Z","caller":"traceutil/trace.go:171","msg":"trace[1195658590] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"238.56281ms","start":"2024-04-16T17:04:18.660163Z","end":"2024-04-16T17:04:18.898725Z","steps":["trace[1195658590] 'process raft request'  (duration: 121.329744ms)","trace[1195658590] 'compare'  (duration: 115.739674ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:04:18.899667Z","caller":"traceutil/trace.go:171","msg":"trace[1245388816] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"173.130052ms","start":"2024-04-16T17:04:18.726524Z","end":"2024-04-16T17:04:18.899654Z","steps":["trace[1245388816] 'process raft request'  (duration: 172.700374ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:07:13.808177Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-16T17:07:13.808318Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-334221","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	{"level":"warn","ts":"2024-04-16T17:07:13.808457Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T17:07:13.808613Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T17:07:13.902665Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T17:07:13.90273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T17:07:13.902806Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5527995f6263874a","current-leader-member-id":"5527995f6263874a"}
	{"level":"info","ts":"2024-04-16T17:07:13.905422Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-04-16T17:07:13.905566Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2024-04-16T17:07:13.905578Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-334221","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	
	
	==> kernel <==
	 17:12:40 up 10 min,  0 users,  load average: 0.48, 0.49, 0.29
	Linux multinode-334221 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8d106f52934dcdfa80a76dc47a6880908fa0beea83db7ad926fd41e6440bba85] <==
	I0416 17:06:28.520150       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:06:38.529845       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:06:38.529896       1 main.go:227] handling current node
	I0416 17:06:38.529907       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:06:38.529913       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:06:38.530078       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:06:38.530110       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:06:48.537345       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:06:48.537449       1 main.go:227] handling current node
	I0416 17:06:48.537595       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:06:48.537627       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:06:48.537794       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:06:48.537831       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:06:58.552061       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:06:58.552207       1 main.go:227] handling current node
	I0416 17:06:58.552242       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:06:58.552263       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:06:58.552415       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:06:58.552436       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	I0416 17:07:08.563223       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:07:08.563382       1 main.go:227] handling current node
	I0416 17:07:08.563406       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:07:08.563425       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:07:08.563549       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0416 17:07:08.563573       1 main.go:250] Node multinode-334221-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [918a75a3ca792c7899e6cca1291a2206a6d16a56a87c8a96282d1e50ed30ff6c] <==
	I0416 17:11:36.375022       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:11:46.383063       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:11:46.383114       1 main.go:227] handling current node
	I0416 17:11:46.383125       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:11:46.383131       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:11:56.396472       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:11:56.396490       1 main.go:227] handling current node
	I0416 17:11:56.396499       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:11:56.396504       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:12:06.401427       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:12:06.401488       1 main.go:227] handling current node
	I0416 17:12:06.401521       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:12:06.401528       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:12:16.414764       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:12:16.414817       1 main.go:227] handling current node
	I0416 17:12:16.414828       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:12:16.414834       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:12:26.428997       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:12:26.429132       1 main.go:227] handling current node
	I0416 17:12:26.429222       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:12:26.429250       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	I0416 17:12:36.437428       1 main.go:223] Handling node with IPs: map[192.168.39.137:{}]
	I0416 17:12:36.437480       1 main.go:227] handling current node
	I0416 17:12:36.437491       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0416 17:12:36.437502       1 main.go:250] Node multinode-334221-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [37623592e737d93494e9d51485d1ed9593cedf3506372056dcefa10e1cc5aa93] <==
	I0416 17:02:41.496352       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0416 17:02:41.496473       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:02:42.175811       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:02:42.250450       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:02:42.297812       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0416 17:02:42.304885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137]
	I0416 17:02:42.306185       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 17:02:42.311415       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 17:02:42.569396       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:02:43.916560       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:02:43.948692       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0416 17:02:43.968018       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:02:56.263061       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0416 17:02:56.619165       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	W0416 17:07:13.810762       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.831716       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.831830       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.831874       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.831917       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.842260       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.843541       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.844370       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0416 17:07:13.848664       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0416 17:07:13.849103       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:07:13.852919       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [47be8c7e2330a579f45517e2f304bb5b885470924fc23e7e29ecb85b75ddec9b] <==
	I0416 17:08:53.401683       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 17:08:53.416563       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 17:08:53.416615       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 17:08:53.518221       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 17:08:53.530066       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 17:08:53.531226       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 17:08:53.531309       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:08:53.531333       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:08:53.531505       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:08:53.531536       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:08:53.531671       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:08:53.531713       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:08:53.563533       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:08:53.589635       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 17:08:53.598254       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 17:08:53.608216       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0416 17:08:53.639726       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0416 17:08:54.404770       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:08:55.930375       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:08:56.060547       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:08:56.073794       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:08:56.145628       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:08:56.152339       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:09:06.275453       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 17:09:06.312234       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [22266da17977fac28225a62a2a2f2f7054ba50fe78a10ae5d071022f545acecc] <==
	I0416 17:09:38.608680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.6µs"
	I0416 17:09:41.368880       1 event.go:376] "Event occurred" object="multinode-334221-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-334221-m02 event: Registered Node multinode-334221-m02 in Controller"
	I0416 17:09:44.237877       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:09:44.270664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.633µs"
	I0416 17:09:44.285128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.281µs"
	I0416 17:09:46.024386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="9.737465ms"
	I0416 17:09:46.025228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.962µs"
	I0416 17:09:46.381063       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-d5wzc" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-d5wzc"
	I0416 17:10:03.736456       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:10:04.981580       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334221-m03\" does not exist"
	I0416 17:10:04.982035       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:10:04.998756       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-334221-m03" podCIDRs=["10.244.2.0/24"]
	I0416 17:10:12.483125       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m03"
	I0416 17:10:18.185161       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:10:21.403161       1 event.go:376] "Event occurred" object="multinode-334221-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-334221-m03 event: Removing Node multinode-334221-m03 from Controller"
	I0416 17:11:01.421807       1 event.go:376] "Event occurred" object="multinode-334221-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-334221-m02 status is now: NodeNotReady"
	I0416 17:11:01.436883       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-d5wzc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:11:01.454574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="18.956963ms"
	I0416 17:11:01.454791       1 event.go:376] "Event occurred" object="kube-system/kindnet-xfr28" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:11:01.455674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.228µs"
	I0416 17:11:01.473928       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-24lft" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:11:06.245395       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-2q8wk"
	I0416 17:11:06.272741       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-2q8wk"
	I0416 17:11:06.272818       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-xtm5h"
	I0416 17:11:06.292796       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-xtm5h"
	
	
	==> kube-controller-manager [dffaed579f04740d194061be2b53bb538f8f9eed80633816a715b89481cb131c] <==
	I0416 17:03:46.461844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="15.202478ms"
	I0416 17:03:46.462114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.059µs"
	I0416 17:04:18.905550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:04:18.907819       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334221-m03\" does not exist"
	I0416 17:04:18.939319       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-334221-m03" podCIDRs=["10.244.2.0/24"]
	I0416 17:04:18.943604       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xtm5h"
	I0416 17:04:18.946321       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2q8wk"
	I0416 17:04:20.652160       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-334221-m03"
	I0416 17:04:20.652488       1 event.go:376] "Event occurred" object="multinode-334221-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-334221-m03 event: Registered Node multinode-334221-m03 in Controller"
	I0416 17:04:27.499657       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:04:58.984682       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:05:00.043769       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334221-m03\" does not exist"
	I0416 17:05:00.044541       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:05:00.055267       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-334221-m03" podCIDRs=["10.244.3.0/24"]
	I0416 17:05:07.243402       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m02"
	I0416 17:05:45.739189       1 event.go:376] "Event occurred" object="multinode-334221-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-334221-m02 status is now: NodeNotReady"
	I0416 17:05:45.740871       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-334221-m03"
	I0416 17:05:45.759169       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-24lft" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:05:45.771461       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-tzz4s" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:05:45.782505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.18897ms"
	I0416 17:05:45.783766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.486µs"
	I0416 17:05:45.788640       1 event.go:376] "Event occurred" object="kube-system/kindnet-xfr28" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:05:50.796098       1 event.go:376] "Event occurred" object="multinode-334221-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-334221-m03 status is now: NodeNotReady"
	I0416 17:05:50.810156       1 event.go:376] "Event occurred" object="kube-system/kindnet-2q8wk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 17:05:50.824108       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-xtm5h" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [1ad0500c2ca8e29e1b8745da107ca6fbf183b5664f60bc6570e029bdaee26a5c] <==
	I0416 17:02:57.652233       1 server_others.go:72] "Using iptables proxy"
	I0416 17:02:57.674916       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	I0416 17:02:57.783086       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:02:57.783250       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:02:57.783572       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:02:57.794704       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:02:57.794936       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:02:57.795081       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:02:57.799851       1 config.go:188] "Starting service config controller"
	I0416 17:02:57.801438       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:02:57.801506       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:02:57.801531       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:02:57.800894       1 config.go:315] "Starting node config controller"
	I0416 17:02:57.801897       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:02:57.902071       1 shared_informer.go:318] Caches are synced for node config
	I0416 17:02:57.902150       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:02:57.902160       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ff8b0d5f33be5d0c388b6420a8c4001940345d5299a6a77da9b0dc7c620d5008] <==
	I0416 17:08:55.257219       1 server_others.go:72] "Using iptables proxy"
	I0416 17:08:55.281513       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	I0416 17:08:55.413279       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:08:55.413303       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:08:55.413320       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:08:55.429202       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:08:55.429413       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:08:55.429424       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:08:55.436129       1 config.go:188] "Starting service config controller"
	I0416 17:08:55.437088       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:08:55.437244       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:08:55.437254       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:08:55.440895       1 config.go:315] "Starting node config controller"
	I0416 17:08:55.441900       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:08:55.538602       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:08:55.538661       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:08:55.542386       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [677a87ab6b202ed47e6e4484709b9626166fabdcb171b69dcd26773a3385afa5] <==
	I0416 17:08:51.274383       1 serving.go:380] Generated self-signed cert in-memory
	W0416 17:08:53.482069       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 17:08:53.482155       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:08:53.482183       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 17:08:53.482208       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 17:08:53.562403       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 17:08:53.562527       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:08:53.570658       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:08:53.570786       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:08:53.582273       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 17:08:53.582357       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 17:08:53.671248       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [842b6569b6e088b911a616198c1184f02a0c489489c785005b6036a6286de6e2] <==
	W0416 17:02:41.711674       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:02:41.711734       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:02:41.717203       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 17:02:41.717765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 17:02:41.760254       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:02:41.760395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:02:41.769170       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:02:41.769294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:02:41.799219       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:02:41.799278       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:02:41.807095       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:02:41.807148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:02:41.817827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 17:02:41.817999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 17:02:41.829134       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:02:41.829246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:02:41.872818       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:02:41.872843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:02:41.882682       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:02:41.882735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0416 17:02:43.882425       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:07:13.835371       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 17:07:13.835469       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 17:07:13.835827       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 17:07:13.836249       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 16 17:10:49 multinode-334221 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:10:49 multinode-334221 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.426426    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod120c3e394989b4d3ebee3b461ba74f97/crio-03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8: Error finding container 03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8: Status 404 returned error can't find the container with id 03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.426877    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/podde04df6b-6ad2-4417-94fd-1d8bb97b864a/crio-e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332: Error finding container e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332: Status 404 returned error can't find the container with id e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.427126    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod967de72eba21f1ee9f74d3a0d8fc1538/crio-1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f: Error finding container 1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f: Status 404 returned error can't find the container with id 1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.427303    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod5dd215e8-2408-4dd5-971e-984ba5364a2b/crio-792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003: Error finding container 792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003: Status 404 returned error can't find the container with id 792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.427437    3071 manager.go:1116] Failed to create existing container: /kubepods/pod8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1/crio-0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd: Error finding container 0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd: Status 404 returned error can't find the container with id 0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.427571    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podbec786d6-f06c-401d-af63-69faa1ffcd84/crio-69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31: Error finding container 69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31: Status 404 returned error can't find the container with id 69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.427798    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod90fe0e05-fb6a-4fe3-8eb6-780165e0a570/crio-1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853: Error finding container 1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853: Status 404 returned error can't find the container with id 1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.428044    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda7a5a5dc6e39c6c525ff7d9719f9ca00/crio-a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7: Error finding container a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7: Status 404 returned error can't find the container with id a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7
	Apr 16 17:10:49 multinode-334221 kubelet[3071]: E0416 17:10:49.428180    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6d052fc5203f79937ba06a7a4a172dee/crio-258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639: Error finding container 258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639: Status 404 returned error can't find the container with id 258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.393020    3071 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:11:49 multinode-334221 kubelet[3071]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:11:49 multinode-334221 kubelet[3071]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:11:49 multinode-334221 kubelet[3071]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:11:49 multinode-334221 kubelet[3071]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.425988    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda7a5a5dc6e39c6c525ff7d9719f9ca00/crio-a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7: Error finding container a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7: Status 404 returned error can't find the container with id a403e706ad90275acb8134912ea58bbcc7cba8a79906dbe1ec4d6f3366bc01c7
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.426331    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod90fe0e05-fb6a-4fe3-8eb6-780165e0a570/crio-1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853: Error finding container 1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853: Status 404 returned error can't find the container with id 1daef1766a0ead8624f367dee5fbf208d85489e81a409849bfab12cad4e03853
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.426728    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6d052fc5203f79937ba06a7a4a172dee/crio-258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639: Error finding container 258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639: Status 404 returned error can't find the container with id 258d7e84b6f54492082b28c02bb553a89947d3178431fd4f6a69e352426a1639
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.427561    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod5dd215e8-2408-4dd5-971e-984ba5364a2b/crio-792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003: Error finding container 792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003: Status 404 returned error can't find the container with id 792dfcb8e32e68ed5bb4f36d8717de44b41db510904fed2e8a6f23db6e4ce003
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.430322    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/podde04df6b-6ad2-4417-94fd-1d8bb97b864a/crio-e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332: Error finding container e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332: Status 404 returned error can't find the container with id e795d10063d9ef900442249df4d0c538ae7c6b8fb717dc5db4ec46733ee21332
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.430586    3071 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podbec786d6-f06c-401d-af63-69faa1ffcd84/crio-69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31: Error finding container 69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31: Status 404 returned error can't find the container with id 69f4c1c5a6a7b83c5a7b4aa7a80bc927e4d16cd0532a596a5e302538feda6c31
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.431076    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod967de72eba21f1ee9f74d3a0d8fc1538/crio-1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f: Error finding container 1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f: Status 404 returned error can't find the container with id 1f07ad5930705454e9d0214ed41354a4fd6b99f51377f9ec68be2906cdd43f1f
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.432107    3071 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod120c3e394989b4d3ebee3b461ba74f97/crio-03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8: Error finding container 03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8: Status 404 returned error can't find the container with id 03f5937495793c5c205f3e127a144437038d6d9d5273b83de858f374362bdbc8
	Apr 16 17:11:49 multinode-334221 kubelet[3071]: E0416 17:11:49.432307    3071 manager.go:1116] Failed to create existing container: /kubepods/pod8eb2b4b6-6c15-443c-bd5e-80d4389ce8a1/crio-0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd: Error finding container 0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd: Status 404 returned error can't find the container with id 0b7212d0b852b706ac372c4c3b49f10ad20871bd838933ab6d23a56f03be08dd
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:12:39.234478   40669 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18649-3628/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-334221 -n multinode-334221
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-334221 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.70s)
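The `bufio.Scanner: token too long` error in the stderr block above is the standard failure mode of Go's bufio.Scanner when a single line exceeds its default 64 KiB token limit (bufio.MaxScanTokenSize); lastStart.txt evidently contains a line longer than that, so the post-mortem could not echo the last start logs. The sketch below is illustrative only, not minikube's actual logs.go code: it reproduces the error and shows the usual mitigation of enlarging the scanner buffer before the first Scan call.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// A single "line" longer than bufio.MaxScanTokenSize (64 KiB) is enough
		// to reproduce the "token too long" error seen while reading lastStart.txt.
		long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

		s := bufio.NewScanner(strings.NewReader(long))
		for s.Scan() {
		}
		fmt.Println("default buffer:", s.Err()) // bufio.Scanner: token too long

		// Usual mitigation: raise the maximum token size before the first Scan
		// call (the 10 MiB cap here is an arbitrary illustrative value).
		s = bufio.NewScanner(strings.NewReader(long))
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for s.Scan() {
		}
		fmt.Println("enlarged buffer:", s.Err()) // <nil>
	}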

                                                
                                    
x
+
TestPreload (277.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-891736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0416 17:17:03.890128   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 17:17:10.029880   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-891736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m16.862485897s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-891736 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-891736 image pull gcr.io/k8s-minikube/busybox: (1.094506303s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-891736
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-891736: exit status 82 (2m0.49046972s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-891736"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-891736 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-16 17:20:37.718707406 +0000 UTC m=+3673.117383939
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-891736 -n test-preload-891736
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-891736 -n test-preload-891736: exit status 3 (18.574905218s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:20:56.289178   43998 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.119:22: connect: no route to host
	E0416 17:20:56.289198   43998 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.119:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-891736" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-891736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-891736
--- FAIL: TestPreload (277.97s)
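Exit status 82 above is minikube's GUEST_STOP_TIMEOUT path: `minikube stop` asks the driver to shut the VM down and then polls its state, and when the stop deadline passes with the domain still reporting "Running" the command gives up. The sketch below only illustrates that poll-until-stopped-or-deadline pattern under assumptions; vm, stopVM, vmState and the timeout value are hypothetical stand-ins, not minikube's actual driver API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vm, stopVM and vmState are hypothetical stand-ins for the hypervisor
	// driver calls a stop path would make; they are not minikube's real API.
	type vm struct{ state string }

	func (v *vm) stopVM()         {} // request a guest shutdown (no-op in this sketch)
	func (v *vm) vmState() string { return v.state }

	// stopWithTimeout issues a stop request and polls until the machine reports
	// "Stopped" or the deadline passes, mirroring the failure in the log above:
	// the state never leaves "Running", so the caller reports GUEST_STOP_TIMEOUT.
	func stopWithTimeout(v *vm, timeout time.Duration) error {
		v.stopVM()
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if v.vmState() == "Stopped" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		machine := &vm{state: "Running"} // never transitions in this sketch
		if err := stopWithTimeout(machine, 3*time.Second); err != nil {
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
			// a real CLI would exit non-zero here (minikube uses code 82)
		}
	}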

                                                
                                    
x
+
TestKubernetesUpgrade (467.21s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.906127354s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-633875] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-633875" primary control-plane node in "kubernetes-upgrade-633875" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 17:31:06.289679   53352 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:31:06.289833   53352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:31:06.289844   53352 out.go:304] Setting ErrFile to fd 2...
	I0416 17:31:06.289851   53352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:31:06.290049   53352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:31:06.290639   53352 out.go:298] Setting JSON to false
	I0416 17:31:06.291616   53352 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4418,"bootTime":1713284248,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:31:06.291679   53352 start.go:139] virtualization: kvm guest
	I0416 17:31:06.294013   53352 out.go:177] * [kubernetes-upgrade-633875] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:31:06.295334   53352 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:31:06.295336   53352 notify.go:220] Checking for updates...
	I0416 17:31:06.296762   53352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:31:06.298308   53352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:31:06.299586   53352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:31:06.300828   53352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:31:06.302161   53352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:31:06.303818   53352 config.go:182] Loaded profile config "embed-certs-512869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:31:06.303972   53352 config.go:182] Loaded profile config "no-preload-368813": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:31:06.304085   53352 config.go:182] Loaded profile config "old-k8s-version-795352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 17:31:06.304179   53352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:31:06.338201   53352 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 17:31:06.339533   53352 start.go:297] selected driver: kvm2
	I0416 17:31:06.339547   53352 start.go:901] validating driver "kvm2" against <nil>
	I0416 17:31:06.339560   53352 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:31:06.340197   53352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:31:06.340267   53352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:31:06.354695   53352 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:31:06.354743   53352 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:31:06.354956   53352 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0416 17:31:06.355022   53352 cni.go:84] Creating CNI manager for ""
	I0416 17:31:06.355037   53352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:31:06.355046   53352 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 17:31:06.355111   53352 start.go:340] cluster config:
	{Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:31:06.355224   53352 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:31:06.357001   53352 out.go:177] * Starting "kubernetes-upgrade-633875" primary control-plane node in "kubernetes-upgrade-633875" cluster
	I0416 17:31:06.358288   53352 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 17:31:06.358321   53352 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 17:31:06.358329   53352 cache.go:56] Caching tarball of preloaded images
	I0416 17:31:06.358412   53352 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:31:06.358426   53352 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0416 17:31:06.358536   53352 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/config.json ...
	I0416 17:31:06.358560   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/config.json: {Name:mk58a516462c9ded9c48816487592b322a142168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:31:06.358715   53352 start.go:360] acquireMachinesLock for kubernetes-upgrade-633875: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:31:06.358756   53352 start.go:364] duration metric: took 24.531µs to acquireMachinesLock for "kubernetes-upgrade-633875"
	I0416 17:31:06.358778   53352 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:31:06.358847   53352 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 17:31:06.360448   53352 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:31:06.360569   53352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:31:06.360601   53352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:31:06.374068   53352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43281
	I0416 17:31:06.374509   53352 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:31:06.375053   53352 main.go:141] libmachine: Using API Version  1
	I0416 17:31:06.375073   53352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:31:06.375478   53352 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:31:06.375672   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:31:06.375850   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:31:06.375993   53352 start.go:159] libmachine.API.Create for "kubernetes-upgrade-633875" (driver="kvm2")
	I0416 17:31:06.376023   53352 client.go:168] LocalClient.Create starting
	I0416 17:31:06.376062   53352 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 17:31:06.376101   53352 main.go:141] libmachine: Decoding PEM data...
	I0416 17:31:06.376125   53352 main.go:141] libmachine: Parsing certificate...
	I0416 17:31:06.376200   53352 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 17:31:06.376228   53352 main.go:141] libmachine: Decoding PEM data...
	I0416 17:31:06.376246   53352 main.go:141] libmachine: Parsing certificate...
	I0416 17:31:06.376271   53352 main.go:141] libmachine: Running pre-create checks...
	I0416 17:31:06.376294   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .PreCreateCheck
	I0416 17:31:06.376632   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetConfigRaw
	I0416 17:31:06.376996   53352 main.go:141] libmachine: Creating machine...
	I0416 17:31:06.377011   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .Create
	I0416 17:31:06.377151   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Creating KVM machine...
	I0416 17:31:06.378363   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found existing default KVM network
	I0416 17:31:06.379751   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:06.379629   53375 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015920}
	I0416 17:31:06.379779   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | created network xml: 
	I0416 17:31:06.379786   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | <network>
	I0416 17:31:06.379792   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |   <name>mk-kubernetes-upgrade-633875</name>
	I0416 17:31:06.379798   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |   <dns enable='no'/>
	I0416 17:31:06.379803   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |   
	I0416 17:31:06.379811   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0416 17:31:06.379817   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |     <dhcp>
	I0416 17:31:06.379826   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0416 17:31:06.379832   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |     </dhcp>
	I0416 17:31:06.379839   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |   </ip>
	I0416 17:31:06.379847   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG |   
	I0416 17:31:06.379863   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | </network>
	I0416 17:31:06.379873   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | 
	I0416 17:31:06.384794   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | trying to create private KVM network mk-kubernetes-upgrade-633875 192.168.39.0/24...
	I0416 17:31:06.449404   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | private KVM network mk-kubernetes-upgrade-633875 192.168.39.0/24 created
	I0416 17:31:06.449462   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875 ...
	I0416 17:31:06.449484   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 17:31:06.449515   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:06.449374   53375 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:31:06.449551   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:31:06.680880   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:06.680730   53375 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa...
	I0416 17:31:06.807990   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:06.807888   53375 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/kubernetes-upgrade-633875.rawdisk...
	I0416 17:31:06.808014   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Writing magic tar header
	I0416 17:31:06.808031   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Writing SSH key tar header
	I0416 17:31:06.808039   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:06.808001   53375 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875 ...
	I0416 17:31:06.808142   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875
	I0416 17:31:06.808241   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875 (perms=drwx------)
	I0416 17:31:06.808271   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 17:31:06.808280   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 17:31:06.808297   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:31:06.808314   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 17:31:06.808331   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 17:31:06.808348   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 17:31:06.808361   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Checking permissions on dir: /home/jenkins
	I0416 17:31:06.808379   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 17:31:06.808392   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Checking permissions on dir: /home
	I0416 17:31:06.808404   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Skipping /home - not owner
	I0416 17:31:06.808419   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 17:31:06.808431   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 17:31:06.808449   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Creating domain...
	I0416 17:31:06.809449   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) define libvirt domain using xml: 
	I0416 17:31:06.809472   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) <domain type='kvm'>
	I0416 17:31:06.809483   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   <name>kubernetes-upgrade-633875</name>
	I0416 17:31:06.809491   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   <memory unit='MiB'>2200</memory>
	I0416 17:31:06.809501   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   <vcpu>2</vcpu>
	I0416 17:31:06.809517   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   <features>
	I0416 17:31:06.809527   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <acpi/>
	I0416 17:31:06.809535   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <apic/>
	I0416 17:31:06.809545   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <pae/>
	I0416 17:31:06.809562   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     
	I0416 17:31:06.809575   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   </features>
	I0416 17:31:06.809590   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   <cpu mode='host-passthrough'>
	I0416 17:31:06.809617   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   
	I0416 17:31:06.809641   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   </cpu>
	I0416 17:31:06.809673   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   <os>
	I0416 17:31:06.809706   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <type>hvm</type>
	I0416 17:31:06.809733   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <boot dev='cdrom'/>
	I0416 17:31:06.809745   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <boot dev='hd'/>
	I0416 17:31:06.809756   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <bootmenu enable='no'/>
	I0416 17:31:06.809769   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   </os>
	I0416 17:31:06.809779   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   <devices>
	I0416 17:31:06.809790   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <disk type='file' device='cdrom'>
	I0416 17:31:06.809808   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/boot2docker.iso'/>
	I0416 17:31:06.809820   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <target dev='hdc' bus='scsi'/>
	I0416 17:31:06.809846   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <readonly/>
	I0416 17:31:06.809863   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     </disk>
	I0416 17:31:06.809877   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <disk type='file' device='disk'>
	I0416 17:31:06.809891   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 17:31:06.809914   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/kubernetes-upgrade-633875.rawdisk'/>
	I0416 17:31:06.809939   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <target dev='hda' bus='virtio'/>
	I0416 17:31:06.809952   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     </disk>
	I0416 17:31:06.809964   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <interface type='network'>
	I0416 17:31:06.809982   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <source network='mk-kubernetes-upgrade-633875'/>
	I0416 17:31:06.809994   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <model type='virtio'/>
	I0416 17:31:06.810007   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     </interface>
	I0416 17:31:06.810023   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <interface type='network'>
	I0416 17:31:06.810037   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <source network='default'/>
	I0416 17:31:06.810048   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <model type='virtio'/>
	I0416 17:31:06.810059   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     </interface>
	I0416 17:31:06.810071   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <serial type='pty'>
	I0416 17:31:06.810084   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <target port='0'/>
	I0416 17:31:06.810099   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     </serial>
	I0416 17:31:06.810113   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <console type='pty'>
	I0416 17:31:06.810126   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <target type='serial' port='0'/>
	I0416 17:31:06.810137   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     </console>
	I0416 17:31:06.810146   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     <rng model='virtio'>
	I0416 17:31:06.810160   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)       <backend model='random'>/dev/random</backend>
	I0416 17:31:06.810175   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     </rng>
	I0416 17:31:06.810187   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     
	I0416 17:31:06.810198   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)     
	I0416 17:31:06.810210   53352 main.go:141] libmachine: (kubernetes-upgrade-633875)   </devices>
	I0416 17:31:06.810218   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) </domain>
	I0416 17:31:06.810240   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) 
	I0416 17:31:06.814371   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:97:b8:69 in network default
	I0416 17:31:06.814950   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Ensuring networks are active...
	I0416 17:31:06.814972   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:06.815540   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Ensuring network default is active
	I0416 17:31:06.815902   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Ensuring network mk-kubernetes-upgrade-633875 is active
	I0416 17:31:06.816419   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Getting domain xml...
	I0416 17:31:06.817275   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Creating domain...
	I0416 17:31:08.001287   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Waiting to get IP...
	I0416 17:31:08.002189   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:08.002620   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:08.002673   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:08.002615   53375 retry.go:31] will retry after 282.176798ms: waiting for machine to come up
	I0416 17:31:08.285814   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:08.286260   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:08.286290   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:08.286209   53375 retry.go:31] will retry after 379.93705ms: waiting for machine to come up
	I0416 17:31:08.667682   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:08.668277   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:08.668304   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:08.668245   53375 retry.go:31] will retry after 431.634704ms: waiting for machine to come up
	I0416 17:31:09.101738   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:09.102161   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:09.102194   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:09.102107   53375 retry.go:31] will retry after 532.886484ms: waiting for machine to come up
	I0416 17:31:09.636786   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:09.637312   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:09.637345   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:09.637256   53375 retry.go:31] will retry after 574.250376ms: waiting for machine to come up
	I0416 17:31:10.212922   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:10.213226   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:10.213256   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:10.213190   53375 retry.go:31] will retry after 907.282411ms: waiting for machine to come up
	I0416 17:31:11.121960   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:11.122334   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:11.122368   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:11.122296   53375 retry.go:31] will retry after 1.17808768s: waiting for machine to come up
	I0416 17:31:12.301859   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:12.302221   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:12.302250   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:12.302162   53375 retry.go:31] will retry after 1.363323331s: waiting for machine to come up
	I0416 17:31:13.667553   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:13.668020   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:13.668059   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:13.667933   53375 retry.go:31] will retry after 1.785849752s: waiting for machine to come up
	I0416 17:31:15.454985   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:15.455395   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:15.455421   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:15.455363   53375 retry.go:31] will retry after 2.164054997s: waiting for machine to come up
	I0416 17:31:17.621478   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:17.621993   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:17.622008   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:17.621963   53375 retry.go:31] will retry after 2.846512178s: waiting for machine to come up
	I0416 17:31:20.469778   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:20.470226   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:20.470256   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:20.470183   53375 retry.go:31] will retry after 2.614055599s: waiting for machine to come up
	I0416 17:31:23.087055   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:23.087493   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:23.087518   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:23.087446   53375 retry.go:31] will retry after 3.053507073s: waiting for machine to come up
	I0416 17:31:26.142940   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:26.143304   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find current IP address of domain kubernetes-upgrade-633875 in network mk-kubernetes-upgrade-633875
	I0416 17:31:26.143329   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | I0416 17:31:26.143262   53375 retry.go:31] will retry after 4.252297395s: waiting for machine to come up
	I0416 17:31:30.397725   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.398132   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has current primary IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.398158   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Found IP for machine: 192.168.39.149
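The "will retry after ..." lines above show libmachine polling for the domain's DHCP lease with a growing, jittered delay until an IP appears. A rough sketch of that retry pattern in Go; lookupIP is a hypothetical placeholder for the lease query and the backoff constants are assumptions, not minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for a MAC address.
    // It is a hypothetical helper; here it always fails so the loop keeps retrying.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a jittered, growing delay, mirroring the
    // "will retry after ...: waiting for machine to come up" lines above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow the backoff, roughly as the log shows
        }
        return "", fmt.Errorf("machine %s did not get an IP within %s", mac, timeout)
    }

    func main() {
        if _, err := waitForIP("52:54:00:94:a1:92", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }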
	I0416 17:31:30.398183   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Reserving static IP address...
	I0416 17:31:30.398490   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-633875", mac: "52:54:00:94:a1:92", ip: "192.168.39.149"} in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.469999   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Getting to WaitForSSH function...
	I0416 17:31:30.470034   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Reserved static IP address: 192.168.39.149
	I0416 17:31:30.470082   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Waiting for SSH to be available...
	I0416 17:31:30.472366   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.472745   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:30.472780   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.472872   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Using SSH client type: external
	I0416 17:31:30.472909   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa (-rw-------)
	I0416 17:31:30.472942   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 17:31:30.472957   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | About to run SSH command:
	I0416 17:31:30.472972   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | exit 0
	I0416 17:31:30.592616   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | SSH cmd err, output: <nil>: 
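SSH readiness is probed by running a trivial "exit 0" through an external ssh client with host-key checking disabled, using the options listed above. A small sketch of the same probe, with the key path and IP copied from the log; this is not the libmachine implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        keyPath := "/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa"
        ip := "192.168.39.149"
        // Run a no-op remote command; success means sshd is up and the key works.
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        if err := cmd.Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }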
	I0416 17:31:30.592905   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) KVM machine creation complete!
	I0416 17:31:30.593394   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetConfigRaw
	I0416 17:31:30.593941   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:31:30.594189   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:31:30.594383   53352 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 17:31:30.594396   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetState
	I0416 17:31:30.595724   53352 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 17:31:30.595740   53352 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 17:31:30.595748   53352 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 17:31:30.595757   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:30.597854   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.598172   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:30.598202   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.598363   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:30.598520   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:30.598654   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:30.598799   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:30.598947   53352 main.go:141] libmachine: Using SSH client type: native
	I0416 17:31:30.599111   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:31:30.599121   53352 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 17:31:30.696045   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:31:30.696085   53352 main.go:141] libmachine: Detecting the provisioner...
	I0416 17:31:30.696095   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:30.698770   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.699106   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:30.699147   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.699259   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:30.699477   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:30.699687   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:30.699828   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:30.700000   53352 main.go:141] libmachine: Using SSH client type: native
	I0416 17:31:30.700182   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:31:30.700195   53352 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 17:31:30.797895   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 17:31:30.797960   53352 main.go:141] libmachine: found compatible host: buildroot
	I0416 17:31:30.797970   53352 main.go:141] libmachine: Provisioning with buildroot...
	I0416 17:31:30.797986   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:31:30.798230   53352 buildroot.go:166] provisioning hostname "kubernetes-upgrade-633875"
	I0416 17:31:30.798255   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:31:30.798454   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:30.800855   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.801186   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:30.801231   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.801305   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:30.801478   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:30.801619   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:30.801750   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:30.801897   53352 main.go:141] libmachine: Using SSH client type: native
	I0416 17:31:30.802053   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:31:30.802067   53352 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-633875 && echo "kubernetes-upgrade-633875" | sudo tee /etc/hostname
	I0416 17:31:30.918449   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-633875
	
	I0416 17:31:30.918478   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:30.920873   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.921163   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:30.921190   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:30.921411   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:30.921574   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:30.921694   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:30.921811   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:30.921934   53352 main.go:141] libmachine: Using SSH client type: native
	I0416 17:31:30.922141   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:31:30.922166   53352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-633875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-633875/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-633875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:31:31.034063   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
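The two SSH commands above set the hostname, persist it to /etc/hostname, and add a 127.0.1.1 entry to /etc/hosts when one is missing. A sketch that just composes those command strings in Go, for illustration only:

    package main

    import "fmt"

    func main() {
        hostname := "kubernetes-upgrade-633875"
        // Set the hostname and persist it, as in the first SSH command above.
        setCmd := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)
        // Ensure /etc/hosts resolves the hostname, as in the second SSH command above.
        hostsCmd := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
        fmt.Println(setCmd)
        fmt.Println(hostsCmd)
    }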
	I0416 17:31:31.034088   53352 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:31:31.034115   53352 buildroot.go:174] setting up certificates
	I0416 17:31:31.034125   53352 provision.go:84] configureAuth start
	I0416 17:31:31.034132   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:31:31.034438   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:31:31.037064   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.037426   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.037453   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.037564   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:31.039807   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.040113   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.040140   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.040234   53352 provision.go:143] copyHostCerts
	I0416 17:31:31.040302   53352 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:31:31.040323   53352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:31:31.040404   53352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:31:31.040551   53352 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:31:31.040563   53352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:31:31.040603   53352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:31:31.040703   53352 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:31:31.040714   53352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:31:31.040749   53352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:31:31.040818   53352 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-633875 san=[127.0.0.1 192.168.39.149 kubernetes-upgrade-633875 localhost minikube]
	I0416 17:31:31.148321   53352 provision.go:177] copyRemoteCerts
	I0416 17:31:31.148382   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:31:31.148409   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:31.151080   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.151420   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.151447   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.151595   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:31.151775   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:31.151915   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:31.152045   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:31:31.232859   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:31:31.258901   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0416 17:31:31.284058   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 17:31:31.308875   53352 provision.go:87] duration metric: took 274.738462ms to configureAuth
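configureAuth generates a server certificate whose SANs cover the loopback address, the machine IP, the machine name, localhost and minikube, then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A compact sketch of generating a certificate with that SAN set using crypto/x509; it is self-signed here for brevity, whereas minikube signs with its own CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-633875"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log above.
            DNSNames:    []string{"kubernetes-upgrade-633875", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.149")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }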
	I0416 17:31:31.308898   53352 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:31:31.309043   53352 config.go:182] Loaded profile config "kubernetes-upgrade-633875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 17:31:31.309116   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:31.311535   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.311896   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.311925   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.312056   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:31.312242   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:31.312409   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:31.312542   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:31.312729   53352 main.go:141] libmachine: Using SSH client type: native
	I0416 17:31:31.312939   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:31:31.312956   53352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:31:31.582145   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:31:31.582187   53352 main.go:141] libmachine: Checking connection to Docker...
	I0416 17:31:31.582199   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetURL
	I0416 17:31:31.583348   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Using libvirt version 6000000
	I0416 17:31:31.585469   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.585755   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.585784   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.585949   53352 main.go:141] libmachine: Docker is up and running!
	I0416 17:31:31.585965   53352 main.go:141] libmachine: Reticulating splines...
	I0416 17:31:31.585973   53352 client.go:171] duration metric: took 25.209942341s to LocalClient.Create
	I0416 17:31:31.585993   53352 start.go:167] duration metric: took 25.209999268s to libmachine.API.Create "kubernetes-upgrade-633875"
	I0416 17:31:31.586007   53352 start.go:293] postStartSetup for "kubernetes-upgrade-633875" (driver="kvm2")
	I0416 17:31:31.586018   53352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:31:31.586041   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:31:31.586277   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:31:31.586299   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:31.588570   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.588909   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.588930   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.589061   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:31.589228   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:31.589356   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:31.589483   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:31:31.667386   53352 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:31:31.672480   53352 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:31:31.672505   53352 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:31:31.672569   53352 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:31:31.672639   53352 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:31:31.672718   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:31:31.682515   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:31:31.708747   53352 start.go:296] duration metric: took 122.728286ms for postStartSetup
	I0416 17:31:31.708791   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetConfigRaw
	I0416 17:31:31.709340   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:31:31.711789   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.712106   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.712137   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.712288   53352 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/config.json ...
	I0416 17:31:31.712485   53352 start.go:128] duration metric: took 25.353627608s to createHost
	I0416 17:31:31.712506   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:31.714672   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.714960   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.714987   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.715105   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:31.715327   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:31.715476   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:31.715614   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:31.715765   53352 main.go:141] libmachine: Using SSH client type: native
	I0416 17:31:31.715920   53352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:31:31.715939   53352 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 17:31:31.813825   53352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713288691.785044553
	
	I0416 17:31:31.813847   53352 fix.go:216] guest clock: 1713288691.785044553
	I0416 17:31:31.813853   53352 fix.go:229] Guest: 2024-04-16 17:31:31.785044553 +0000 UTC Remote: 2024-04-16 17:31:31.712496954 +0000 UTC m=+25.467938514 (delta=72.547599ms)
	I0416 17:31:31.813876   53352 fix.go:200] guest clock delta is within tolerance: 72.547599ms
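The guest clock check runs "date +%s.%N" on the machine and compares the result against the host clock, accepting small deltas. A sketch of that comparison; the 2s tolerance is an assumed value for illustration, not the threshold minikube uses:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock parses the output of `date +%s.%N`, e.g. "1713288691.785044553".
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1713288691.785044553")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed tolerance for this sketch
        fmt.Printf("guest clock delta %s within tolerance: %v\n", delta, delta <= tolerance)
    }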
	I0416 17:31:31.813883   53352 start.go:83] releasing machines lock for "kubernetes-upgrade-633875", held for 25.455114504s
	I0416 17:31:31.813912   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:31:31.814181   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:31:31.816880   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.817270   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.817299   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.817483   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:31:31.818002   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:31:31.818168   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:31:31.818272   53352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:31:31.818313   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:31.818454   53352 ssh_runner.go:195] Run: cat /version.json
	I0416 17:31:31.818477   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:31:31.820913   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.821173   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.821239   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.821264   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.821427   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:31.821589   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:31.821598   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:31.821615   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:31.821800   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:31:31.821804   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:31.821984   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:31:31.821995   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:31:31.822225   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:31:31.822363   53352 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:31:31.921181   53352 ssh_runner.go:195] Run: systemctl --version
	I0416 17:31:31.927680   53352 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:31:32.094684   53352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:31:32.101446   53352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:31:32.101527   53352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:31:32.121912   53352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:31:32.121934   53352 start.go:494] detecting cgroup driver to use...
	I0416 17:31:32.121991   53352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:31:32.143403   53352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:31:32.160801   53352 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:31:32.160860   53352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:31:32.177827   53352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:31:32.194856   53352 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:31:32.332770   53352 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:31:32.498559   53352 docker.go:233] disabling docker service ...
	I0416 17:31:32.498628   53352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:31:32.514818   53352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:31:32.530125   53352 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:31:32.662979   53352 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:31:32.800968   53352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:31:32.815748   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:31:32.835759   53352 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 17:31:32.835813   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:31:32.847728   53352 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:31:32.847778   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:31:32.859387   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:31:32.871031   53352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:31:32.882652   53352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:31:32.894521   53352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:31:32.904737   53352 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 17:31:32.904800   53352 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 17:31:32.918805   53352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:31:32.929625   53352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:31:33.065018   53352 ssh_runner.go:195] Run: sudo systemctl restart crio
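The steps above point crictl at the CRI-O socket, pin the pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf with sed, then reload systemd and restart crio. A sketch that assembles the same command strings in Go; it prints them rather than executing them over SSH, so it is illustrative only:

    package main

    import "fmt"

    func main() {
        pauseImage := "registry.k8s.io/pause:3.2"
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        cmds := []string{
            // Point crictl at the CRI-O socket.
            "echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml",
            // Pin the pause image and the cgroup driver, as in the log above.
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
            // Apply the new configuration.
            "sudo systemctl daemon-reload",
            "sudo systemctl restart crio",
        }
        for _, c := range cmds {
            fmt.Println(c)
        }
    }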
	I0416 17:31:33.209022   53352 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:31:33.209096   53352 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:31:33.214230   53352 start.go:562] Will wait 60s for crictl version
	I0416 17:31:33.214280   53352 ssh_runner.go:195] Run: which crictl
	I0416 17:31:33.218365   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:31:33.262360   53352 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:31:33.262434   53352 ssh_runner.go:195] Run: crio --version
	I0416 17:31:33.290858   53352 ssh_runner.go:195] Run: crio --version
	I0416 17:31:33.321948   53352 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 17:31:33.323343   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:31:33.325767   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:33.326097   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:31:22 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:31:33.326130   53352 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:31:33.326302   53352 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 17:31:33.330959   53352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:31:33.345670   53352 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:31:33.345768   53352 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 17:31:33.345827   53352 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:31:33.388005   53352 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 17:31:33.388062   53352 ssh_runner.go:195] Run: which lz4
	I0416 17:31:33.392557   53352 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 17:31:33.397288   53352 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:31:33.397330   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 17:31:35.305767   53352 crio.go:462] duration metric: took 1.913243093s to copy over tarball
	I0416 17:31:35.305853   53352 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:31:38.247433   53352 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.941553457s)
	I0416 17:31:38.247466   53352 crio.go:469] duration metric: took 2.941661488s to extract the tarball
	I0416 17:31:38.247475   53352 ssh_runner.go:146] rm: /preloaded.tar.lz4
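The preloaded image tarball is copied to the guest and unpacked into /var with tar, using lz4 for decompression and preserving security xattrs, exactly as the command above shows. A sketch of invoking that extraction locally; it assumes tar and lz4 are installed and is not minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Stream-decompress with lz4 and untar into /var, keeping security xattrs.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4",
            "-C", "/var",
            "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Println("preload extracted")
    }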
	I0416 17:31:38.294264   53352 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:31:38.344655   53352 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 17:31:38.344679   53352 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 17:31:38.344741   53352 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:31:38.344771   53352 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:31:38.344789   53352 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:31:38.344832   53352 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:31:38.344880   53352 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:31:38.345068   53352 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 17:31:38.345080   53352 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:31:38.345427   53352 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 17:31:38.346166   53352 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 17:31:38.346176   53352 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:31:38.346190   53352 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:31:38.346209   53352 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:31:38.346255   53352 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:31:38.346270   53352 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:31:38.346205   53352 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:31:38.346616   53352 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 17:31:38.500671   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 17:31:38.504863   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:31:38.505550   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 17:31:38.508761   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:31:38.533283   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 17:31:38.538863   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:31:38.551288   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:31:38.583594   53352 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 17:31:38.583641   53352 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 17:31:38.583701   53352 ssh_runner.go:195] Run: which crictl
	I0416 17:31:38.642911   53352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:31:38.688398   53352 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 17:31:38.688455   53352 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:31:38.688512   53352 ssh_runner.go:195] Run: which crictl
	I0416 17:31:38.725763   53352 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 17:31:38.725818   53352 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:31:38.725862   53352 ssh_runner.go:195] Run: which crictl
	I0416 17:31:38.754664   53352 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 17:31:38.754713   53352 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:31:38.754758   53352 ssh_runner.go:195] Run: which crictl
	I0416 17:31:38.772034   53352 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 17:31:38.772098   53352 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 17:31:38.772111   53352 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 17:31:38.772143   53352 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:31:38.772152   53352 ssh_runner.go:195] Run: which crictl
	I0416 17:31:38.772179   53352 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 17:31:38.772207   53352 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:31:38.772238   53352 ssh_runner.go:195] Run: which crictl
	I0416 17:31:38.772244   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 17:31:38.772184   53352 ssh_runner.go:195] Run: which crictl
	I0416 17:31:38.899769   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:31:38.899797   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 17:31:38.899833   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:31:38.899908   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 17:31:38.899933   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:31:38.899995   53352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 17:31:38.900035   53352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:31:39.047470   53352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 17:31:39.047582   53352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 17:31:39.047598   53352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 17:31:39.047645   53352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 17:31:39.047726   53352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 17:31:39.047727   53352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 17:31:39.047774   53352 cache_images.go:92] duration metric: took 703.079674ms to LoadCachedImages
	W0416 17:31:39.047854   53352 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
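	The cache miss above is non-fatal: kubeadm pulls the v1.20.0 images itself during preflight, as the init output further down shows. A hedged sketch, assuming a local minikube binary on PATH (the image cache is shared across profiles), of how the on-disk cache could be pre-populated so LoadCachedImages finds these images instead:

	    # Hypothetical pre-caching step, not part of this test run; stores images under ~/.minikube/cache/images
	    minikube cache add registry.k8s.io/coredns:1.7.0
	    minikube cache add registry.k8s.io/kube-apiserver:v1.20.0
	    # ...and likewise for the remaining images listed in LoadCachedImages above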
	I0416 17:31:39.047867   53352 kubeadm.go:928] updating node { 192.168.39.149 8443 v1.20.0 crio true true} ...
	I0416 17:31:39.047950   53352 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-633875 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
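	The kubelet unit drop-in rendered above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node a few lines below. A minimal diagnostic sketch, not part of the test run, assuming the profile name kubernetes-upgrade-633875 from this log, for confirming on the guest that the drop-in landed as rendered:

	    # Show the effective kubelet unit plus drop-ins, and the drop-in file itself.
	    minikube -p kubernetes-upgrade-633875 ssh "sudo systemctl cat kubelet"
	    minikube -p kubernetes-upgrade-633875 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"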
	I0416 17:31:39.048004   53352 ssh_runner.go:195] Run: crio config
	I0416 17:31:39.105834   53352 cni.go:84] Creating CNI manager for ""
	I0416 17:31:39.105858   53352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:31:39.105867   53352 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:31:39.105885   53352 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-633875 NodeName:kubernetes-upgrade-633875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 17:31:39.106031   53352 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-633875"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
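	The three documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration/KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init. A hedged sketch, not part of the test run, of how that rendered config could be sanity-checked inside the guest using the same binary path shown in this log:

	    # Inside the guest (minikube ssh); renders what 'kubeadm init' would do without changing the node.
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run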
	I0416 17:31:39.106106   53352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 17:31:39.116476   53352 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:31:39.116543   53352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:31:39.126354   53352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0416 17:31:39.144672   53352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:31:39.162424   53352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0416 17:31:39.180517   53352 ssh_runner.go:195] Run: grep 192.168.39.149	control-plane.minikube.internal$ /etc/hosts
	I0416 17:31:39.184494   53352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:31:39.198596   53352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:31:39.324820   53352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:31:39.345655   53352 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875 for IP: 192.168.39.149
	I0416 17:31:39.345682   53352 certs.go:194] generating shared ca certs ...
	I0416 17:31:39.345702   53352 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:31:39.345865   53352 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:31:39.345918   53352 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:31:39.345932   53352 certs.go:256] generating profile certs ...
	I0416 17:31:39.346021   53352 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.key
	I0416 17:31:39.346043   53352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.crt with IP's: []
	I0416 17:31:39.561174   53352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.crt ...
	I0416 17:31:39.561202   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.crt: {Name:mk3b54c7a8be057c9dc6fee02be3f63dc43213b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:31:39.561395   53352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.key ...
	I0416 17:31:39.561411   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.key: {Name:mk9daf06f2a951d9446d40a38a25dcc510b42e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:31:39.561521   53352 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.key.cf32f48a
	I0416 17:31:39.561547   53352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.crt.cf32f48a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.149]
	I0416 17:31:39.825022   53352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.crt.cf32f48a ...
	I0416 17:31:39.825050   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.crt.cf32f48a: {Name:mke3f12849ff2707c9824fe66fd3254917531e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:31:39.825229   53352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.key.cf32f48a ...
	I0416 17:31:39.825251   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.key.cf32f48a: {Name:mkd545a34203ee723a1a6869361e30b4fc0d1fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:31:39.825351   53352 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.crt.cf32f48a -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.crt
	I0416 17:31:39.825446   53352 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.key.cf32f48a -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.key
	I0416 17:31:39.825504   53352 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.key
	I0416 17:31:39.825522   53352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.crt with IP's: []
	I0416 17:31:40.080404   53352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.crt ...
	I0416 17:31:40.080439   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.crt: {Name:mke84f087e15813d9e5fe4fedd958379b9d3a017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:31:40.080620   53352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.key ...
	I0416 17:31:40.080638   53352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.key: {Name:mk092a7bbafc656de92ea1c4f63d92a1b1c7a6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:31:40.080899   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:31:40.080947   53352 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:31:40.080961   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:31:40.080996   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:31:40.081019   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:31:40.081042   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:31:40.081103   53352 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:31:40.081918   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:31:40.114510   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:31:40.143868   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:31:40.192215   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:31:40.243284   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0416 17:31:40.272422   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:31:40.302630   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:31:40.330318   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 17:31:40.358276   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:31:40.384551   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:31:40.412708   53352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:31:40.439775   53352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
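	Once the profile certificates are copied into /var/lib/minikube/certs, the apiserver certificate should carry the SANs requested above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.149). A small sketch, not part of the test run, to confirm that from inside the kubernetes-upgrade-633875 guest (via minikube ssh):

	    # Print the SANs baked into the apiserver certificate copied above.
	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'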
	I0416 17:31:40.460608   53352 ssh_runner.go:195] Run: openssl version
	I0416 17:31:40.467060   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:31:40.478563   53352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:31:40.483500   53352 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:31:40.483543   53352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:31:40.489614   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:31:40.500753   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:31:40.511861   53352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:31:40.517221   53352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:31:40.517267   53352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:31:40.523395   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:31:40.535534   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:31:40.546953   53352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:31:40.551884   53352 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:31:40.551918   53352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:31:40.557967   53352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
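	The openssl -hash / ln -fs pairs above are how the host CA bundle entries get their names: the link in /etc/ssl/certs is the certificate's subject hash with a .0 suffix (b5213941.0 for minikubeCA.pem in this run). The same mapping can be reproduced with the file names from this log:

	    # The link name is the subject hash printed by openssl.
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem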
	I0416 17:31:40.569322   53352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:31:40.573863   53352 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:31:40.573915   53352 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:31:40.573985   53352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:31:40.574032   53352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:31:40.610632   53352 cri.go:89] found id: ""
	I0416 17:31:40.610702   53352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 17:31:40.621242   53352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:31:40.631468   53352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:31:40.642003   53352 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:31:40.642017   53352 kubeadm.go:156] found existing configuration files:
	
	I0416 17:31:40.642056   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:31:40.658095   53352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:31:40.658149   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:31:40.669412   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:31:40.679752   53352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:31:40.679785   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:31:40.690205   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:31:40.700707   53352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:31:40.700738   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:31:40.711319   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:31:40.721529   53352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:31:40.721578   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:31:40.731996   53352 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:31:40.987317   53352 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:33:38.613843   53352 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 17:33:38.613963   53352 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 17:33:38.615573   53352 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 17:33:38.615661   53352 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:33:38.615756   53352 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:33:38.615945   53352 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:33:38.616062   53352 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:33:38.616126   53352 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:33:38.618157   53352 out.go:204]   - Generating certificates and keys ...
	I0416 17:33:38.618223   53352 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:33:38.618289   53352 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:33:38.618372   53352 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:33:38.618462   53352 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:33:38.618539   53352 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 17:33:38.618627   53352 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 17:33:38.618703   53352 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 17:33:38.618898   53352 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-633875 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	I0416 17:33:38.618962   53352 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 17:33:38.619144   53352 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-633875 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	I0416 17:33:38.619214   53352 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:33:38.619267   53352 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:33:38.619331   53352 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 17:33:38.619418   53352 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:33:38.619473   53352 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:33:38.619526   53352 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:33:38.619586   53352 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:33:38.619650   53352 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:33:38.619779   53352 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:33:38.619850   53352 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:33:38.619883   53352 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:33:38.619938   53352 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:33:38.621528   53352 out.go:204]   - Booting up control plane ...
	I0416 17:33:38.621629   53352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:33:38.621697   53352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:33:38.621757   53352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:33:38.621836   53352 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:33:38.621989   53352 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:33:38.622058   53352 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 17:33:38.622137   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:33:38.622371   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:33:38.622434   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:33:38.622622   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:33:38.622701   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:33:38.622910   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:33:38.622995   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:33:38.623239   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:33:38.623366   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:33:38.623585   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:33:38.623594   53352 kubeadm.go:309] 
	I0416 17:33:38.623638   53352 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 17:33:38.623681   53352 kubeadm.go:309] 		timed out waiting for the condition
	I0416 17:33:38.623694   53352 kubeadm.go:309] 
	I0416 17:33:38.623746   53352 kubeadm.go:309] 	This error is likely caused by:
	I0416 17:33:38.623800   53352 kubeadm.go:309] 		- The kubelet is not running
	I0416 17:33:38.623941   53352 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 17:33:38.623953   53352 kubeadm.go:309] 
	I0416 17:33:38.624044   53352 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 17:33:38.624073   53352 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 17:33:38.624113   53352 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 17:33:38.624124   53352 kubeadm.go:309] 
	I0416 17:33:38.624240   53352 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 17:33:38.624317   53352 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 17:33:38.624325   53352 kubeadm.go:309] 
	I0416 17:33:38.624411   53352 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 17:33:38.624491   53352 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 17:33:38.624555   53352 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 17:33:38.624619   53352 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 17:33:38.624651   53352 kubeadm.go:309] 
	W0416 17:33:38.624747   53352 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-633875 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-633875 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-633875 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-633875 localhost] and IPs [192.168.39.149 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
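	The kubeadm hints above boil down to finding out why the kubelet never answered on port 10248 before minikube retries the init below. A condensed diagnostic sketch, not part of the test run, using the same commands the kubeadm output suggests and the crio socket path used throughout this log; run inside the guest (e.g. via 'minikube -p kubernetes-upgrade-633875 ssh'):

	    # Kubelet state and recent logs.
	    systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 50
	    # The health endpoint kubeadm was polling.
	    curl -sS http://localhost:10248/healthz; echo
	    # Control-plane containers as seen by cri-o.
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause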
	I0416 17:33:38.624787   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 17:33:39.094019   53352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:33:39.110412   53352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:33:39.121531   53352 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:33:39.121554   53352 kubeadm.go:156] found existing configuration files:
	
	I0416 17:33:39.121597   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:33:39.132164   53352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:33:39.132216   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:33:39.143434   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:33:39.155164   53352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:33:39.155216   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:33:39.167481   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:33:39.177707   53352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:33:39.177756   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:33:39.188287   53352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:33:39.198895   53352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:33:39.198938   53352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:33:39.213971   53352 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:33:39.460011   53352 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:35:36.505651   53352 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 17:35:36.505756   53352 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 17:35:36.507580   53352 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 17:35:36.507638   53352 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:35:36.507732   53352 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:35:36.507843   53352 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:35:36.507984   53352 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:35:36.508076   53352 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:35:36.510013   53352 out.go:204]   - Generating certificates and keys ...
	I0416 17:35:36.510083   53352 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:35:36.510138   53352 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:35:36.510207   53352 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 17:35:36.510267   53352 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 17:35:36.510325   53352 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 17:35:36.510378   53352 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 17:35:36.510435   53352 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 17:35:36.510502   53352 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 17:35:36.510578   53352 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 17:35:36.510647   53352 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 17:35:36.510679   53352 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 17:35:36.510756   53352 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:35:36.510833   53352 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:35:36.510914   53352 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:35:36.511002   53352 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:35:36.511092   53352 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:35:36.511236   53352 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:35:36.511340   53352 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:35:36.511383   53352 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:35:36.511487   53352 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:35:36.512925   53352 out.go:204]   - Booting up control plane ...
	I0416 17:35:36.513014   53352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:35:36.513080   53352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:35:36.513157   53352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:35:36.513249   53352 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:35:36.513453   53352 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:35:36.513530   53352 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 17:35:36.513588   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:36.513758   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:35:36.513818   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:36.513988   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:35:36.514059   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:36.514230   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:35:36.514307   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:36.514580   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:35:36.514684   53352 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:36.514847   53352 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:35:36.514857   53352 kubeadm.go:309] 
	I0416 17:35:36.514890   53352 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 17:35:36.514928   53352 kubeadm.go:309] 		timed out waiting for the condition
	I0416 17:35:36.514936   53352 kubeadm.go:309] 
	I0416 17:35:36.514963   53352 kubeadm.go:309] 	This error is likely caused by:
	I0416 17:35:36.514998   53352 kubeadm.go:309] 		- The kubelet is not running
	I0416 17:35:36.515083   53352 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 17:35:36.515090   53352 kubeadm.go:309] 
	I0416 17:35:36.515240   53352 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 17:35:36.515288   53352 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 17:35:36.515336   53352 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 17:35:36.515347   53352 kubeadm.go:309] 
	I0416 17:35:36.515468   53352 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 17:35:36.515570   53352 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 17:35:36.515580   53352 kubeadm.go:309] 
	I0416 17:35:36.515702   53352 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 17:35:36.515777   53352 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 17:35:36.515898   53352 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 17:35:36.516003   53352 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 17:35:36.516078   53352 kubeadm.go:393] duration metric: took 3m55.942166073s to StartCluster
	I0416 17:35:36.516088   53352 kubeadm.go:309] 
	I0416 17:35:36.516146   53352 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:35:36.516205   53352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:35:36.562019   53352 cri.go:89] found id: ""
	I0416 17:35:36.562047   53352 logs.go:276] 0 containers: []
	W0416 17:35:36.562057   53352 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:35:36.562065   53352 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:35:36.562119   53352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:35:36.599667   53352 cri.go:89] found id: ""
	I0416 17:35:36.599695   53352 logs.go:276] 0 containers: []
	W0416 17:35:36.599706   53352 logs.go:278] No container was found matching "etcd"
	I0416 17:35:36.599714   53352 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:35:36.599770   53352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:35:36.637392   53352 cri.go:89] found id: ""
	I0416 17:35:36.637417   53352 logs.go:276] 0 containers: []
	W0416 17:35:36.637426   53352 logs.go:278] No container was found matching "coredns"
	I0416 17:35:36.637434   53352 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:35:36.637491   53352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:35:36.674771   53352 cri.go:89] found id: ""
	I0416 17:35:36.674795   53352 logs.go:276] 0 containers: []
	W0416 17:35:36.674802   53352 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:35:36.674807   53352 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:35:36.674862   53352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:35:36.712377   53352 cri.go:89] found id: ""
	I0416 17:35:36.712408   53352 logs.go:276] 0 containers: []
	W0416 17:35:36.712418   53352 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:35:36.712426   53352 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:35:36.712495   53352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:35:36.748236   53352 cri.go:89] found id: ""
	I0416 17:35:36.748268   53352 logs.go:276] 0 containers: []
	W0416 17:35:36.748276   53352 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:35:36.748284   53352 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:35:36.748338   53352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:35:36.783646   53352 cri.go:89] found id: ""
	I0416 17:35:36.783667   53352 logs.go:276] 0 containers: []
	W0416 17:35:36.783675   53352 logs.go:278] No container was found matching "kindnet"
	I0416 17:35:36.783683   53352 logs.go:123] Gathering logs for kubelet ...
	I0416 17:35:36.783694   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:35:36.836199   53352 logs.go:123] Gathering logs for dmesg ...
	I0416 17:35:36.836227   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:35:36.850987   53352 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:35:36.851011   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:35:36.975416   53352 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:35:36.975438   53352 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:35:36.975450   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:35:37.086866   53352 logs.go:123] Gathering logs for container status ...
	I0416 17:35:37.086900   53352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0416 17:35:37.134188   53352 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 17:35:37.134233   53352 out.go:239] * 
	W0416 17:35:37.134284   53352 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:35:37.134304   53352 out.go:239] * 
	W0416 17:35:37.135104   53352 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:35:37.138333   53352 out.go:177] 
	W0416 17:35:37.139710   53352 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:35:37.139764   53352 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 17:35:37.139787   53352 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 17:35:37.141243   53352 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
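The failure above matches minikube's own diagnosis in the log: the kubelet never reported healthy during the v1.20.0 kubeadm init, and the suggested follow-up is to inspect the kubelet and retry with an explicit cgroup driver. A minimal sketch of that follow-up, using only commands already named in the output above (the --extra-config retry is minikube's suggested workaround for issue 4172, not a verified fix for this run):

	minikube -p kubernetes-upgrade-633875 ssh "sudo systemctl status kubelet"
	minikube -p kubernetes-upgrade-633875 ssh "sudo journalctl -xeu kubelet"
	minikube start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd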
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-633875
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-633875: (6.308647959s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-633875 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-633875 status --format={{.Host}}: exit status 7 (73.509351ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0416 17:37:03.890482   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.980148209s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-633875 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.330881ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-633875] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-633875
	    minikube start -p kubernetes-upgrade-633875 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6338752 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-633875 --kubernetes-version=v1.30.0-rc.2
	    

                                                
                                                
** /stderr **
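The exit status 106 here is the expected result of the downgrade attempt: minikube refuses to move the existing v1.30.0-rc.2 cluster back to v1.20.0 and lists the ways forward instead. Restated as plain commands (taken verbatim from the suggestion block above; option 3 is what the test itself does next):

	# Option 1: recreate the profile at the older version
	minikube delete -p kubernetes-upgrade-633875
	minikube start -p kubernetes-upgrade-633875 --kubernetes-version=v1.20.0

	# Option 3: keep the existing cluster at v1.30.0-rc.2
	minikube start -p kubernetes-upgrade-633875 --kubernetes-version=v1.30.0-rc.2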
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0416 17:37:10.030245   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m45.484562085s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-16 17:38:50.198890541 +0000 UTC m=+4765.597567061
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-633875 -n kubernetes-upgrade-633875
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-633875 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-633875 logs -n 25: (1.313796753s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:26 UTC | 16 Apr 24 17:28 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| ssh     | cert-options-303502 ssh                                | cert-options-303502          | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC | 16 Apr 24 17:27 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |                |                     |                     |
	| ssh     | -p cert-options-303502 -- sudo                         | cert-options-303502          | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC | 16 Apr 24 17:27 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |                |                     |                     |
	| delete  | -p cert-options-303502                                 | cert-options-303502          | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC | 16 Apr 24 17:27 UTC |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC | 16 Apr 24 17:28 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-795352        | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-368813             | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC | 16 Apr 24 17:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512869            | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC | 16 Apr 24 17:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-795352                              | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC | 16 Apr 24 17:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-795352             | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC | 16 Apr 24 17:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-795352                              | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| start   | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:30 UTC | 16 Apr 24 17:31 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	| delete  | -p                                                     | disable-driver-mounts-376814 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	|         | disable-driver-mounts-376814                           |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-368813                  | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-512869                 | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:35 UTC |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:37 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC | 16 Apr 24 17:38 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
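For readability, the kubernetes-upgrade sequence recorded in the last rows of the table above (stop, attempted downgrade to v1.20.0, then re-start on v1.30.0-rc.2) corresponds to a command flow along these lines. This is a sketch assembled from the table rows, assuming the same out/minikube-linux-amd64 binary used elsewhere in this run, not a verbatim replay of the test harness:

	# stop the existing profile (completed 17:35 UTC)
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-633875

	# bring it up on the release-candidate version (completed 17:37 UTC)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 \
	    --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 \
	    --driver=kvm2 --container-runtime=crio

	# attempted downgrade to v1.20.0 (no completion time is recorded in the table)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 \
	    --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio

	# final start back on v1.30.0-rc.2 (completed 17:38 UTC)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-633875 --memory=2200 \
	    --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 \
	    --driver=kvm2 --container-runtime=crio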
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:37:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:37:04.764200   55388 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:37:04.764318   55388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:37:04.764328   55388 out.go:304] Setting ErrFile to fd 2...
	I0416 17:37:04.764333   55388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:37:04.764518   55388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:37:04.765077   55388 out.go:298] Setting JSON to false
	I0416 17:37:04.765938   55388 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4777,"bootTime":1713284248,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:37:04.765996   55388 start.go:139] virtualization: kvm guest
	I0416 17:37:04.768061   55388 out.go:177] * [kubernetes-upgrade-633875] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:37:04.769412   55388 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:37:04.769409   55388 notify.go:220] Checking for updates...
	I0416 17:37:04.770894   55388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:37:04.772099   55388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:37:04.773370   55388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:37:04.774743   55388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:37:04.776092   55388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:37:04.777659   55388 config.go:182] Loaded profile config "kubernetes-upgrade-633875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:37:04.778033   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:04.778075   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:04.792607   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0416 17:37:04.793124   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:04.793717   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:37:04.793739   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:04.794049   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:04.794231   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:04.794500   55388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:37:04.794759   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:04.794791   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:04.808862   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37649
	I0416 17:37:04.809234   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:04.809675   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:37:04.809703   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:04.810062   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:04.810254   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:04.846580   55388 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:37:04.847941   55388 start.go:297] selected driver: kvm2
	I0416 17:37:04.847953   55388 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:37:04.848068   55388 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:37:04.848852   55388 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:37:04.848933   55388 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:37:04.863094   55388 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:37:04.863631   55388 cni.go:84] Creating CNI manager for ""
	I0416 17:37:04.863655   55388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:37:04.863706   55388 start.go:340] cluster config:
	{Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:37:04.863864   55388 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:37:04.865575   55388 out.go:177] * Starting "kubernetes-upgrade-633875" primary control-plane node in "kubernetes-upgrade-633875" cluster
	I0416 17:37:00.567076   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:03.069630   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:04.866891   55388 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 17:37:04.866923   55388 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0416 17:37:04.866945   55388 cache.go:56] Caching tarball of preloaded images
	I0416 17:37:04.867026   55388 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:37:04.867040   55388 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0416 17:37:04.867151   55388 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/config.json ...
	I0416 17:37:04.867374   55388 start.go:360] acquireMachinesLock for kubernetes-upgrade-633875: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:37:06.350191   55388 start.go:364] duration metric: took 1.482788883s to acquireMachinesLock for "kubernetes-upgrade-633875"
	I0416 17:37:06.350255   55388 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:37:06.350276   55388 fix.go:54] fixHost starting: 
	I0416 17:37:06.350668   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:06.350717   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:06.367553   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0416 17:37:06.368203   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:06.369878   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:37:06.369907   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:06.370277   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:06.370464   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:06.370618   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetState
	I0416 17:37:06.372099   55388 fix.go:112] recreateIfNeeded on kubernetes-upgrade-633875: state=Running err=<nil>
	W0416 17:37:06.372128   55388 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:37:06.374023   55388 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-633875" VM ...
	I0416 17:37:04.889394   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:04.889918   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has current primary IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:04.889949   53724 main.go:141] libmachine: (no-preload-368813) Found IP for machine: 192.168.72.33
	I0416 17:37:04.889958   53724 main.go:141] libmachine: (no-preload-368813) Reserving static IP address...
	I0416 17:37:04.890418   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "no-preload-368813", mac: "52:54:00:f7:61:eb", ip: "192.168.72.33"} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:04.890447   53724 main.go:141] libmachine: (no-preload-368813) DBG | skip adding static IP to network mk-no-preload-368813 - found existing host DHCP lease matching {name: "no-preload-368813", mac: "52:54:00:f7:61:eb", ip: "192.168.72.33"}
	I0416 17:37:04.890464   53724 main.go:141] libmachine: (no-preload-368813) Reserved static IP address: 192.168.72.33
	I0416 17:37:04.890477   53724 main.go:141] libmachine: (no-preload-368813) Waiting for SSH to be available...
	I0416 17:37:04.890490   53724 main.go:141] libmachine: (no-preload-368813) DBG | Getting to WaitForSSH function...
	I0416 17:37:04.892931   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:04.893315   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:04.893340   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:04.893490   53724 main.go:141] libmachine: (no-preload-368813) DBG | Using SSH client type: external
	I0416 17:37:04.893514   53724 main.go:141] libmachine: (no-preload-368813) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa (-rw-------)
	I0416 17:37:04.893543   53724 main.go:141] libmachine: (no-preload-368813) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 17:37:04.893563   53724 main.go:141] libmachine: (no-preload-368813) DBG | About to run SSH command:
	I0416 17:37:04.893578   53724 main.go:141] libmachine: (no-preload-368813) DBG | exit 0
	I0416 17:37:05.021762   53724 main.go:141] libmachine: (no-preload-368813) DBG | SSH cmd err, output: <nil>: 
	I0416 17:37:05.022093   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetConfigRaw
	I0416 17:37:05.022855   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetIP
	I0416 17:37:05.025557   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.025925   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.025958   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.026136   53724 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/config.json ...
	I0416 17:37:05.026308   53724 machine.go:94] provisionDockerMachine start ...
	I0416 17:37:05.026325   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:05.026619   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.028932   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.029318   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.029354   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.029446   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.029637   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.029782   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.029933   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.030105   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:05.030305   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:05.030321   53724 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:37:05.150085   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:37:05.150126   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetMachineName
	I0416 17:37:05.150422   53724 buildroot.go:166] provisioning hostname "no-preload-368813"
	I0416 17:37:05.150454   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetMachineName
	I0416 17:37:05.150643   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.153784   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.154147   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.154185   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.154326   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.154480   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.154661   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.154784   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.154960   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:05.155123   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:05.155135   53724 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-368813 && echo "no-preload-368813" | sudo tee /etc/hostname
	I0416 17:37:05.299556   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-368813
	
	I0416 17:37:05.299585   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.302432   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.302778   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.302804   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.302997   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.303223   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.303381   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.303510   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.303659   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:05.303870   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:05.303888   53724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-368813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-368813/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-368813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:37:05.431975   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:37:05.432002   53724 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:37:05.432030   53724 buildroot.go:174] setting up certificates
	I0416 17:37:05.432040   53724 provision.go:84] configureAuth start
	I0416 17:37:05.432048   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetMachineName
	I0416 17:37:05.432369   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetIP
	I0416 17:37:05.434863   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.435262   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.435292   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.435412   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.437642   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.437996   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.438040   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.438197   53724 provision.go:143] copyHostCerts
	I0416 17:37:05.438244   53724 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:37:05.438255   53724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:37:05.438306   53724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:37:05.438440   53724 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:37:05.438455   53724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:37:05.438490   53724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:37:05.438558   53724 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:37:05.438566   53724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:37:05.438585   53724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:37:05.438633   53724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.no-preload-368813 san=[127.0.0.1 192.168.72.33 localhost minikube no-preload-368813]
	I0416 17:37:05.579937   53724 provision.go:177] copyRemoteCerts
	I0416 17:37:05.579990   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:37:05.580013   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.582601   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.582920   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.582951   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.583075   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.583244   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.583386   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.583500   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:05.676952   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:37:05.705789   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:37:05.739072   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 17:37:05.770865   53724 provision.go:87] duration metric: took 338.815509ms to configureAuth
	I0416 17:37:05.770894   53724 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:37:05.771080   53724 config.go:182] Loaded profile config "no-preload-368813": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:37:05.771178   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.773993   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.774334   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.774363   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.774508   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.774723   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.774906   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.775066   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.775252   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:05.775455   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:05.775475   53724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:37:06.087339   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:37:06.087369   53724 machine.go:97] duration metric: took 1.061049558s to provisionDockerMachine
	I0416 17:37:06.087380   53724 start.go:293] postStartSetup for "no-preload-368813" (driver="kvm2")
	I0416 17:37:06.087391   53724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:37:06.087406   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.087718   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:37:06.087751   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:06.090496   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.090907   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.090940   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.091130   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:06.091301   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.091461   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:06.091606   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:06.183788   53724 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:37:06.188831   53724 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:37:06.188866   53724 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:37:06.188930   53724 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:37:06.189008   53724 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:37:06.189090   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:37:06.201361   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:37:06.229472   53724 start.go:296] duration metric: took 142.079309ms for postStartSetup
	I0416 17:37:06.229516   53724 fix.go:56] duration metric: took 19.74706223s for fixHost
	I0416 17:37:06.229540   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:06.232137   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.232482   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.232516   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.232682   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:06.232903   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.233082   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.233223   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:06.233412   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:06.233650   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:06.233663   53724 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:37:06.350010   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713289026.321915296
	
	I0416 17:37:06.350036   53724 fix.go:216] guest clock: 1713289026.321915296
	I0416 17:37:06.350045   53724 fix.go:229] Guest: 2024-04-16 17:37:06.321915296 +0000 UTC Remote: 2024-04-16 17:37:06.229520511 +0000 UTC m=+336.716982241 (delta=92.394785ms)
	I0416 17:37:06.350086   53724 fix.go:200] guest clock delta is within tolerance: 92.394785ms
	I0416 17:37:06.350096   53724 start.go:83] releasing machines lock for "no-preload-368813", held for 19.867678127s
	I0416 17:37:06.350130   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.350445   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetIP
	I0416 17:37:06.353155   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.353565   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.353601   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.353712   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.354248   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.354441   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.354510   53724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:37:06.354558   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:06.354676   53724 ssh_runner.go:195] Run: cat /version.json
	I0416 17:37:06.354701   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:06.357402   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.357437   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.357726   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.357752   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.357849   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:06.357848   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.357874   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.358010   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.358120   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:06.358181   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:06.358259   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.358341   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:06.358428   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:06.358576   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:06.471357   53724 ssh_runner.go:195] Run: systemctl --version
	I0416 17:37:06.478216   53724 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:37:06.628508   53724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:37:06.637713   53724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:37:06.637786   53724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:37:06.662717   53724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:37:06.662741   53724 start.go:494] detecting cgroup driver to use...
	I0416 17:37:06.662806   53724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:37:06.685365   53724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:37:06.705771   53724 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:37:06.705857   53724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:37:06.723890   53724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:37:06.739861   53724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:37:06.866653   53724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:37:07.029166   53724 docker.go:233] disabling docker service ...
	I0416 17:37:07.029242   53724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:37:07.045705   53724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:37:07.060441   53724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:37:07.200010   53724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:37:07.341930   53724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:37:07.358423   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:37:07.381694   53724 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:37:07.381764   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.394648   53724 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:37:07.394714   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.408756   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.420986   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.434883   53724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:37:07.449279   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.463375   53724 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.484682   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.498345   53724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:37:07.510414   53724 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 17:37:07.510485   53724 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 17:37:07.526274   53724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:37:07.537928   53724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:37:07.687822   53724 ssh_runner.go:195] Run: sudo systemctl restart crio
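	Condensed for readability, the container-runtime preparation logged above amounts to the following shell steps. This is a sketch assembled from the Run: lines in this log, slightly consolidated, not an exact transcript of the harness:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

	# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

	# allow unprivileged low ports inside pods
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf

	# kernel prerequisites, then restart the runtime
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio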
	I0416 17:37:07.851570   53724 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:37:07.851660   53724 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:37:07.857638   53724 start.go:562] Will wait 60s for crictl version
	I0416 17:37:07.857694   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:07.862026   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:37:07.911220   53724 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:37:07.911303   53724 ssh_runner.go:195] Run: crio --version
	I0416 17:37:07.942172   53724 ssh_runner.go:195] Run: crio --version
	I0416 17:37:07.987215   53724 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 17:37:07.988643   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetIP
	I0416 17:37:07.992015   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:07.992372   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:07.992412   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:07.992625   53724 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0416 17:37:07.997913   53724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:37:08.015198   53724 kubeadm.go:877] updating cluster {Name:no-preload-368813 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-368813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:37:08.015319   53724 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 17:37:08.015349   53724 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:37:08.061694   53724 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0416 17:37:08.061724   53724 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.2 registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 registry.k8s.io/kube-scheduler:v1.30.0-rc.2 registry.k8s.io/kube-proxy:v1.30.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 17:37:08.061791   53724 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.062005   53724 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.062135   53724 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.062258   53724 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.062373   53724 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.062529   53724 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0416 17:37:08.062671   53724 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.062788   53724 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.064021   53724 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.064250   53724 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.064478   53724 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.064501   53724 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.064635   53724 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.064686   53724 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.064705   53724 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.064646   53724 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0416 17:37:08.232497   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.236554   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.241828   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0416 17:37:08.245226   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.251937   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.269175   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.271121   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.325571   53724 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0416 17:37:08.325619   53724 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.325668   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.345391   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.444138   53724 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0416 17:37:08.444190   53724 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.444242   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548066   53724 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" does not exist at hash "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b" in container runtime
	I0416 17:37:08.548103   53724 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" does not exist at hash "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1" in container runtime
	I0416 17:37:08.548115   53724 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.548130   53724 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.548161   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548163   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548207   53724 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.2" does not exist at hash "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e" in container runtime
	I0416 17:37:08.548241   53724 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" does not exist at hash "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6" in container runtime
	I0416 17:37:08.548248   53724 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.548269   53724 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.548288   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548306   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548335   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.548373   53724 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0416 17:37:08.548398   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.548402   53724 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.548443   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.615779   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.615810   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0416 17:37:08.615820   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.615859   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.615871   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.615783   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.615897   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0416 17:37:08.615945   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0416 17:37:08.616042   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0416 17:37:08.748677   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0416 17:37:08.748786   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 17:37:08.748784   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0416 17:37:08.748958   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 17:37:08.749462   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0416 17:37:08.749524   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0416 17:37:08.749541   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0416 17:37:08.749547   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 17:37:08.749553   53724 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0416 17:37:08.749590   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0416 17:37:08.749596   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0416 17:37:08.749591   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 17:37:08.749657   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0416 17:37:08.749630   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0416 17:37:08.760550   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2 (exists)
	I0416 17:37:08.761129   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2 (exists)
	I0416 17:37:06.375230   55388 machine.go:94] provisionDockerMachine start ...
	I0416 17:37:06.375251   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:06.375442   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:06.377827   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.378205   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.378230   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.378391   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:06.378563   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.378729   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.378849   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:06.378986   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:06.379226   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:06.379241   55388 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:37:06.494024   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-633875
	
	I0416 17:37:06.494053   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:37:06.494319   55388 buildroot.go:166] provisioning hostname "kubernetes-upgrade-633875"
	I0416 17:37:06.494348   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:37:06.494524   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:06.497487   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.497892   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.497922   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.498052   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:06.498248   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.498408   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.498540   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:06.498751   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:06.498974   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:06.498991   55388 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-633875 && echo "kubernetes-upgrade-633875" | sudo tee /etc/hostname
	I0416 17:37:06.636590   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-633875
	
	I0416 17:37:06.636629   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:06.639776   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.640182   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.640212   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.640418   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:06.640591   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.640751   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.640932   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:06.641136   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:06.641301   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:06.641319   55388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-633875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-633875/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-633875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:37:06.767180   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:37:06.767207   55388 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:37:06.767249   55388 buildroot.go:174] setting up certificates
	I0416 17:37:06.767266   55388 provision.go:84] configureAuth start
	I0416 17:37:06.767291   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:37:06.767594   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:37:06.770532   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.770926   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.770976   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.771124   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:06.773394   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.773809   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.773836   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.774061   55388 provision.go:143] copyHostCerts
	I0416 17:37:06.774121   55388 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:37:06.774142   55388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:37:06.774210   55388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:37:06.774341   55388 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:37:06.774355   55388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:37:06.774387   55388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:37:06.774484   55388 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:37:06.774497   55388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:37:06.774530   55388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:37:06.774619   55388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-633875 san=[127.0.0.1 192.168.39.149 kubernetes-upgrade-633875 localhost minikube]
	I0416 17:37:07.210423   55388 provision.go:177] copyRemoteCerts
	I0416 17:37:07.210501   55388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:37:07.210530   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:07.213438   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:07.213842   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:07.213878   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:07.213972   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:07.214172   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:07.214359   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:07.214508   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:37:07.305628   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:37:07.334474   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0416 17:37:07.369595   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 17:37:07.403200   55388 provision.go:87] duration metric: took 635.902682ms to configureAuth
	I0416 17:37:07.403228   55388 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:37:07.403420   55388 config.go:182] Loaded profile config "kubernetes-upgrade-633875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:37:07.403510   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:07.406659   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:07.407098   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:07.407123   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:07.407325   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:07.407508   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:07.407712   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:07.407879   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:07.408051   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:07.408252   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:07.408270   55388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:37:08.476953   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:37:08.476978   55388 machine.go:97] duration metric: took 2.10173376s to provisionDockerMachine
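	The step above writes a one-line sysconfig drop-in so CRI-O treats the service CIDR as an insecure registry (the %!s(MISSING) markers are Go fmt verbs re-expanded by the logger). A minimal sketch of the intended command, with the option value taken from the command output above:
	  # Write /etc/sysconfig/crio.minikube and restart CRI-O so it picks up the flag.
	  sudo mkdir -p /etc/sysconfig
	  printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	  sudo systemctl restart crio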
	I0416 17:37:08.476990   55388 start.go:293] postStartSetup for "kubernetes-upgrade-633875" (driver="kvm2")
	I0416 17:37:08.477005   55388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:37:08.477023   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.477353   55388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:37:08.477390   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:08.480308   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.480674   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.480703   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.480878   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:08.481076   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.481276   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:08.481407   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:37:08.573233   55388 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:37:08.578701   55388 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:37:08.578730   55388 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:37:08.578800   55388 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:37:08.578909   55388 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:37:08.579046   55388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:37:08.594792   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:37:08.631328   55388 start.go:296] duration metric: took 154.326696ms for postStartSetup
	I0416 17:37:08.631361   55388 fix.go:56] duration metric: took 2.281095817s for fixHost
	I0416 17:37:08.631383   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:08.634352   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.634683   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.634712   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.635020   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:08.635245   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.635425   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.635628   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:08.635806   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:08.636007   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:08.636027   55388 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:37:08.755600   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713289028.702903253
	
	I0416 17:37:08.755624   55388 fix.go:216] guest clock: 1713289028.702903253
	I0416 17:37:08.755633   55388 fix.go:229] Guest: 2024-04-16 17:37:08.702903253 +0000 UTC Remote: 2024-04-16 17:37:08.631364556 +0000 UTC m=+3.913384729 (delta=71.538697ms)
	I0416 17:37:08.755661   55388 fix.go:200] guest clock delta is within tolerance: 71.538697ms
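	The clock check above reads the guest's time with date +%s.%N, compares it against the host time it already holds, and only resyncs when the delta exceeds minikube's tolerance. A rough stand-alone illustration of the same comparison (the ssh target and the 2s threshold are assumptions for the sketch, not values from the test):
	  guest=$(ssh docker@192.168.39.149 'date +%s.%N')   # guest clock, seconds.nanoseconds
	  host=$(date +%s.%N)                                # host clock at roughly the same moment
	  awk -v g="$guest" -v h="$host" 'BEGIN {
	    d = (g - h) * 1000
	    printf "delta=%.3fms\n", d
	    exit (d < -2000 || d > 2000)                     # non-zero exit if outside the assumed tolerance
	  }'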
	I0416 17:37:08.755667   55388 start.go:83] releasing machines lock for "kubernetes-upgrade-633875", held for 2.405434774s
	I0416 17:37:08.755693   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.755971   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:37:08.759403   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.759848   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.759881   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.760046   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.760648   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.760857   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.760941   55388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:37:08.760976   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:08.761080   55388 ssh_runner.go:195] Run: cat /version.json
	I0416 17:37:08.761102   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:08.764060   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.764402   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.764690   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.764719   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.764782   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.764813   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.764998   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:08.765092   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:08.765299   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.765373   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.765450   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:08.765567   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:08.765638   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:37:08.765713   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:37:08.922322   55388 ssh_runner.go:195] Run: systemctl --version
	I0416 17:37:08.949599   55388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:37:09.267176   55388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:37:09.305504   55388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:37:09.305574   55388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:37:09.338140   55388 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 17:37:09.338165   55388 start.go:494] detecting cgroup driver to use...
	I0416 17:37:09.338233   55388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:37:09.426636   55388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:37:09.474429   55388 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:37:09.474497   55388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:37:09.500206   55388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:37:09.518014   55388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:37:09.711725   55388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:37:05.564972   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:07.565601   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:09.569287   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:10.865114   53724 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.115431361s)
	I0416 17:37:10.865158   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0416 17:37:10.865276   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.115660537s)
	I0416 17:37:10.865308   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0416 17:37:10.865327   53724 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.115674106s)
	I0416 17:37:10.865354   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2 (exists)
	I0416 17:37:10.865337   53724 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0416 17:37:10.865373   53724 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.115809755s)
	I0416 17:37:10.865391   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2 (exists)
	I0416 17:37:10.865409   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0416 17:37:09.920225   55388 docker.go:233] disabling docker service ...
	I0416 17:37:09.920299   55388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:37:09.950938   55388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:37:09.973589   55388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:37:10.185781   55388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:37:10.385437   55388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:37:10.403356   55388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:37:10.434851   55388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:37:10.434947   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.453473   55388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:37:10.453544   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.471529   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.488189   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.504551   55388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:37:10.522872   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.535463   55388 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.549552   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.562459   55388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:37:10.574700   55388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:37:10.586226   55388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:37:10.756003   55388 ssh_runner.go:195] Run: sudo systemctl restart crio
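	The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager with a per-pod conmon cgroup, and allow unprivileged ports starting at 0; a quick way to confirm the drop-in looks as expected after the restart (expected values inferred from the commands above, not re-read by the test):
	  # Expect: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs",
	  #         conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls.
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl is-active crio   # should print "active" once the restart has completed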
	I0416 17:37:12.068056   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:14.565051   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:14.878395   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.012956596s)
	I0416 17:37:14.878427   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0416 17:37:14.878451   53724 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 17:37:14.878497   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 17:37:16.947627   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2: (2.069101064s)
	I0416 17:37:16.947655   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 from cache
	I0416 17:37:16.947682   53724 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 17:37:16.947732   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 17:37:19.215393   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2: (2.267634517s)
	I0416 17:37:19.215430   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 from cache
	I0416 17:37:19.215458   53724 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0416 17:37:19.215507   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0416 17:37:16.566813   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:19.064680   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:19.970020   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0416 17:37:19.970068   53724 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 17:37:19.970123   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 17:37:22.424392   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.454240217s)
	I0416 17:37:22.424418   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 from cache
	I0416 17:37:22.424446   53724 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 17:37:22.424505   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 17:37:21.564890   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:23.566319   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:24.586584   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.16205441s)
	I0416 17:37:24.586610   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 from cache
	I0416 17:37:24.586641   53724 cache_images.go:123] Successfully loaded all cached images
	I0416 17:37:24.586647   53724 cache_images.go:92] duration metric: took 16.524908979s to LoadCachedImages
	I0416 17:37:24.586657   53724 kubeadm.go:928] updating node { 192.168.72.33 8443 v1.30.0-rc.2 crio true true} ...
	I0416 17:37:24.586774   53724 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-368813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-368813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:37:24.586854   53724 ssh_runner.go:195] Run: crio config
	I0416 17:37:24.645059   53724 cni.go:84] Creating CNI manager for ""
	I0416 17:37:24.645089   53724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:37:24.645103   53724 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:37:24.645132   53724 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.33 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-368813 NodeName:no-preload-368813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:37:24.645282   53724 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-368813"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
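	The kubeadm/kubelet/kube-proxy config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hedged way to sanity-check such a file before kubeadm consumes it, assuming kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.30.0-rc.2 (this invocation is not part of the test itself):
	  # Validate the generated config without changing anything on the node.
	  sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm init \
	    --dry-run --config /var/tmp/minikube/kubeadm.yaml.new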
	
	I0416 17:37:24.645344   53724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 17:37:24.659269   53724 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:37:24.659766   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:37:24.672455   53724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0416 17:37:24.693131   53724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 17:37:24.713433   53724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0416 17:37:24.734204   53724 ssh_runner.go:195] Run: grep 192.168.72.33	control-plane.minikube.internal$ /etc/hosts
	I0416 17:37:24.738626   53724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:37:24.752746   53724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:37:24.885615   53724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:37:24.904188   53724 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813 for IP: 192.168.72.33
	I0416 17:37:24.904208   53724 certs.go:194] generating shared ca certs ...
	I0416 17:37:24.904227   53724 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:37:24.904403   53724 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:37:24.904459   53724 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:37:24.904470   53724 certs.go:256] generating profile certs ...
	I0416 17:37:24.904575   53724 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.key
	I0416 17:37:24.904656   53724 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/apiserver.key.dde448ea
	I0416 17:37:24.904711   53724 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/proxy-client.key
	I0416 17:37:24.904874   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:37:24.904912   53724 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:37:24.904938   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:37:24.904980   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:37:24.905030   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:37:24.905062   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:37:24.905116   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:37:24.905888   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:37:24.938183   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:37:24.966084   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:37:24.993879   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:37:25.027746   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 17:37:25.053149   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:37:25.089639   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:37:25.116547   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:37:25.141964   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:37:25.167574   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:37:25.193102   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:37:25.218836   53724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:37:25.237210   53724 ssh_runner.go:195] Run: openssl version
	I0416 17:37:25.243344   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:37:25.255714   53724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:37:25.260656   53724 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:37:25.260721   53724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:37:25.267057   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:37:25.279172   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:37:25.291391   53724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:37:25.296938   53724 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:37:25.296972   53724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:37:25.303026   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:37:25.315351   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:37:25.327627   53724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:37:25.332320   53724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:37:25.332355   53724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:37:25.338610   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
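	The openssl x509 -hash / ln -fs pairs above follow OpenSSL's hashed-directory convention: a CA is trusted when it is reachable as <subject-hash>.0 inside /etc/ssl/certs. A generic sketch of the same idea (the cert path is simply the minikube CA used above):
	  cert=/usr/share/ca-certificates/minikubeCA.pem      # any PEM certificate works here
	  hash=$(openssl x509 -hash -noout -in "$cert")       # prints the subject hash, e.g. b5213941
	  sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"      # the name OpenSSL-based clients look up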
	I0416 17:37:25.350961   53724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:37:25.356003   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:37:25.362451   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:37:25.368848   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:37:25.375257   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:37:25.381547   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:37:25.387670   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
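The `openssl x509 -noout -checkend 86400` calls above verify that each control-plane certificate is valid now and will not expire within the next 24 hours; the earlier hash/symlink steps populate /etc/ssl/certs with the same certs. A minimal Go sketch of the expiry check, using only the standard library (the cert path and 24h window are taken from the log; the rest is illustrative and not minikube's actual implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // certExpiresWithin reports whether the PEM-encoded certificate at path is
    // already expired or will expire within the given window, mirroring
    // `openssl x509 -noout -checkend <seconds>`.
    func certExpiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Expiring (or expired) if "now + window" is past NotAfter.
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }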
	I0416 17:37:25.393994   53724 kubeadm.go:391] StartCluster: {Name:no-preload-368813 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-368813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:37:25.394072   53724 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:37:25.394104   53724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:37:25.438139   53724 cri.go:89] found id: ""
	I0416 17:37:25.438216   53724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 17:37:25.450096   53724 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 17:37:25.450114   53724 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 17:37:25.450119   53724 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 17:37:25.450162   53724 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 17:37:25.461706   53724 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:37:25.462998   53724 kubeconfig.go:125] found "no-preload-368813" server: "https://192.168.72.33:8443"
	I0416 17:37:25.465272   53724 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 17:37:25.476435   53724 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.33
	I0416 17:37:25.476462   53724 kubeadm.go:1154] stopping kube-system containers ...
	I0416 17:37:25.476471   53724 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 17:37:25.476511   53724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:37:25.518010   53724 cri.go:89] found id: ""
	I0416 17:37:25.518097   53724 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 17:37:25.536784   53724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:37:25.550182   53724 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:37:25.550198   53724 kubeadm.go:156] found existing configuration files:
	
	I0416 17:37:25.550265   53724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:37:25.562463   53724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:37:25.562514   53724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:37:25.575053   53724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:37:25.587142   53724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:37:25.587190   53724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:37:25.599571   53724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:37:25.611495   53724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:37:25.611534   53724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:37:25.623888   53724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:37:25.636118   53724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:37:25.636166   53724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:37:25.648781   53724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:37:25.661134   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:25.783423   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:26.746855   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:26.978330   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:27.075325   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:27.196663   53724 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:37:27.196746   53724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:37:27.696969   53724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:37:28.197025   53724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:37:28.281883   53724 api_server.go:72] duration metric: took 1.085219632s to wait for apiserver process to appear ...
	I0416 17:37:28.281914   53724 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:37:28.281955   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:26.065178   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:28.067229   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:31.430709   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 17:37:31.430738   53724 api_server.go:103] status: https://192.168.72.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 17:37:31.430752   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:31.460238   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 17:37:31.460263   53724 api_server.go:103] status: https://192.168.72.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 17:37:31.782156   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:31.786676   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 17:37:31.786708   53724 api_server.go:103] status: https://192.168.72.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 17:37:32.282799   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:32.287374   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 17:37:32.287396   53724 api_server.go:103] status: https://192.168.72.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 17:37:32.783063   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:32.788958   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 200:
	ok
	I0416 17:37:32.801262   53724 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 17:37:32.801294   53724 api_server.go:131] duration metric: took 4.519371789s to wait for apiserver health ...
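The sequence above polls the apiserver's /healthz endpoint until the anonymous probe stops returning 403/500 and answers 200 "ok". A rough sketch of such a polling loop in Go, with TLS verification disabled because the probe is made anonymously against the apiserver's self-signed serving certificate (the URL comes from the log; the interval, timeout, and function names are assumptions, not minikube's code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the given /healthz URL until it returns HTTP 200 or
    // the deadline passes. 403 and 500 responses are expected while the
    // apiserver finishes its post-start hooks, so they are simply retried.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe, self-signed cert
    		},
    		Timeout: 5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.33:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }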
	I0416 17:37:32.801309   53724 cni.go:84] Creating CNI manager for ""
	I0416 17:37:32.801317   53724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:37:32.802960   53724 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 17:37:32.804534   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 17:37:32.831035   53724 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 17:37:32.865460   53724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:37:32.875725   53724 system_pods.go:59] 8 kube-system pods found
	I0416 17:37:32.875754   53724 system_pods.go:61] "coredns-7db6d8ff4d-69lpx" [b3b140b9-fe8c-4289-94d3-df5f8ee50485] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 17:37:32.875761   53724 system_pods.go:61] "etcd-no-preload-368813" [df27fe8b-1b49-444c-93a7-dbc4e9842cb2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 17:37:32.875768   53724 system_pods.go:61] "kube-apiserver-no-preload-368813" [0b4479c4-5c25-45b2-8ffc-4e974eb41a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 17:37:32.875773   53724 system_pods.go:61] "kube-controller-manager-no-preload-368813" [99df4534-f626-4a7f-9835-ca4935ce4a35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 17:37:32.875779   53724 system_pods.go:61] "kube-proxy-jtn9f" [b64c6a20-cc25-4ea9-9c41-8dac9f537332] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 17:37:32.875784   53724 system_pods.go:61] "kube-scheduler-no-preload-368813" [eccdb209-897b-4f20-ac38-506769602cc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 17:37:32.875788   53724 system_pods.go:61] "metrics-server-569cc877fc-tt8vp" [6c42b82b-7ff1-4f18-a387-a2c7b06adb63] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 17:37:32.875793   53724 system_pods.go:61] "storage-provisioner" [c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 17:37:32.875799   53724 system_pods.go:74] duration metric: took 10.321803ms to wait for pod list to return data ...
	I0416 17:37:32.875805   53724 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:37:32.879090   53724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:37:32.879117   53724 node_conditions.go:123] node cpu capacity is 2
	I0416 17:37:32.879133   53724 node_conditions.go:105] duration metric: took 3.322937ms to run NodePressure ...
	I0416 17:37:32.879152   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:33.168696   53724 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 17:37:33.174450   53724 kubeadm.go:733] kubelet initialised
	I0416 17:37:33.174470   53724 kubeadm.go:734] duration metric: took 5.749269ms waiting for restarted kubelet to initialise ...
	I0416 17:37:33.174476   53724 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:37:33.179502   53724 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.184350   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.184369   53724 pod_ready.go:81] duration metric: took 4.846155ms for pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.184377   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.184383   53724 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.191851   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "etcd-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.191873   53724 pod_ready.go:81] duration metric: took 7.48224ms for pod "etcd-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.191883   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "etcd-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.191891   53724 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.196552   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "kube-apiserver-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.196570   53724 pod_ready.go:81] duration metric: took 4.672597ms for pod "kube-apiserver-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.196577   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "kube-apiserver-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.196582   53724 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.272397   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.272425   53724 pod_ready.go:81] duration metric: took 75.834666ms for pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.272434   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.272440   53724 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jtn9f" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.669448   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "kube-proxy-jtn9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.669478   53724 pod_ready.go:81] duration metric: took 397.031738ms for pod "kube-proxy-jtn9f" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.669486   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "kube-proxy-jtn9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.669493   53724 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:34.069026   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "kube-scheduler-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:34.069052   53724 pod_ready.go:81] duration metric: took 399.552424ms for pod "kube-scheduler-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:34.069061   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "kube-scheduler-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:34.069066   53724 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:34.469216   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:34.469238   53724 pod_ready.go:81] duration metric: took 400.163808ms for pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:34.469247   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:34.469254   53724 pod_ready.go:38] duration metric: took 1.294770407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:37:34.469271   53724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:37:34.482299   53724 ops.go:34] apiserver oom_adj: -16
	I0416 17:37:34.482324   53724 kubeadm.go:591] duration metric: took 9.032199177s to restartPrimaryControlPlane
	I0416 17:37:34.482334   53724 kubeadm.go:393] duration metric: took 9.088344142s to StartCluster
	I0416 17:37:34.482350   53724 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:37:34.482418   53724 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:37:34.484027   53724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:37:34.484259   53724 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:37:34.486190   53724 out.go:177] * Verifying Kubernetes components...
	I0416 17:37:34.484366   53724 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:37:34.484449   53724 config.go:182] Loaded profile config "no-preload-368813": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:37:34.487436   53724 addons.go:69] Setting default-storageclass=true in profile "no-preload-368813"
	I0416 17:37:34.487445   53724 addons.go:69] Setting metrics-server=true in profile "no-preload-368813"
	I0416 17:37:34.487452   53724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:37:34.487468   53724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-368813"
	I0416 17:37:34.487475   53724 addons.go:234] Setting addon metrics-server=true in "no-preload-368813"
	W0416 17:37:34.487483   53724 addons.go:243] addon metrics-server should already be in state true
	I0416 17:37:34.487506   53724 host.go:66] Checking if "no-preload-368813" exists ...
	I0416 17:37:34.487437   53724 addons.go:69] Setting storage-provisioner=true in profile "no-preload-368813"
	I0416 17:37:34.487541   53724 addons.go:234] Setting addon storage-provisioner=true in "no-preload-368813"
	W0416 17:37:34.487555   53724 addons.go:243] addon storage-provisioner should already be in state true
	I0416 17:37:34.487584   53724 host.go:66] Checking if "no-preload-368813" exists ...
	I0416 17:37:34.487823   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.487855   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.487867   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.487895   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.487951   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.487983   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.504274   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38635
	I0416 17:37:34.504426   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0416 17:37:34.504652   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.504883   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.505178   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.505207   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.505368   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.505390   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.505578   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.505720   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.505779   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:37:34.506261   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.506294   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.506850   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0416 17:37:34.507371   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.507842   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.507868   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.508214   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.508765   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.508814   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.509191   53724 addons.go:234] Setting addon default-storageclass=true in "no-preload-368813"
	W0416 17:37:34.509209   53724 addons.go:243] addon default-storageclass should already be in state true
	I0416 17:37:34.509236   53724 host.go:66] Checking if "no-preload-368813" exists ...
	I0416 17:37:34.509521   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.509555   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.522208   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0416 17:37:34.522634   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.523123   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.523151   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.523339   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0416 17:37:34.523492   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.523648   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:37:34.523706   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.524155   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.524184   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.524511   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.524690   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:37:34.525300   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:34.527243   53724 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 17:37:34.528539   53724 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 17:37:34.528555   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 17:37:34.528573   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:34.526300   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:34.528313   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0416 17:37:34.530050   53724 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:34.529061   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.531155   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.531489   53724 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:37:34.531513   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:37:34.531528   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:34.531581   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:34.531607   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.531737   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:34.531904   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:34.532051   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:34.532067   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.532087   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.532282   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:34.532454   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.533039   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.533083   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.534355   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.534689   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:34.534716   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.534868   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:34.535215   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:34.535355   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:34.535489   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:30.565630   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:32.566619   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:35.066221   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:34.580095   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0416 17:37:34.580488   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.580956   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.580981   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.581299   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.581514   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:37:34.582947   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:34.583186   53724 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:37:34.583199   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:37:34.583211   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:34.585917   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.586281   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:34.586309   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.586515   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:34.586905   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:34.587115   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:34.587295   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:34.696222   53724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:37:34.719179   53724 node_ready.go:35] waiting up to 6m0s for node "no-preload-368813" to be "Ready" ...
	I0416 17:37:34.782980   53724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:37:34.798957   53724 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 17:37:34.798986   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 17:37:34.837727   53724 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 17:37:34.837753   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 17:37:34.840957   53724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:37:34.879657   53724 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 17:37:34.879676   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 17:37:34.934346   53724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
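The addon manifests are applied with the cluster's own kubectl binary and the in-VM kubeconfig, as in the commands above. A minimal equivalent invocation sketched in Go with os/exec (binary and manifest paths are copied from the log; the sudo/SSH transport used by the real runner is omitted, and the error handling is illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Apply the metrics-server addon manifests with the cluster's kubeconfig,
    	// mirroring the `KUBECONFIG=... kubectl apply -f ...` command above.
    	cmd := exec.Command(
    		"/var/lib/minikube/binaries/v1.30.0-rc.2/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
    	)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "apply failed:", err)
    	}
    }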
	I0416 17:37:35.223556   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.223578   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.223889   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.223904   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.223913   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.223920   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.223930   53724 main.go:141] libmachine: (no-preload-368813) DBG | Closing plugin on server side
	I0416 17:37:35.224159   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.224181   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.224198   53724 main.go:141] libmachine: (no-preload-368813) DBG | Closing plugin on server side
	I0416 17:37:35.229835   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.229852   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.230093   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.230105   53724 main.go:141] libmachine: (no-preload-368813) DBG | Closing plugin on server side
	I0416 17:37:35.230109   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.893916   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.893935   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.894076   53724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.053083319s)
	I0416 17:37:35.894130   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.894147   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.894316   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.894332   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.894337   53724 main.go:141] libmachine: (no-preload-368813) DBG | Closing plugin on server side
	I0416 17:37:35.894362   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.894374   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.894382   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.894389   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.894340   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.894460   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.894597   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.894611   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.894673   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.894687   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.894705   53724 addons.go:470] Verifying addon metrics-server=true in "no-preload-368813"
	I0416 17:37:35.897547   53724 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 17:37:35.898886   53724 addons.go:505] duration metric: took 1.414544018s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 17:37:36.722873   53724 node_ready.go:53] node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:39.223607   53724 node_ready.go:53] node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:37.565118   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:40.064645   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:41.722887   53724 node_ready.go:53] node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:42.225863   53724 node_ready.go:49] node "no-preload-368813" has status "Ready":"True"
	I0416 17:37:42.225883   53724 node_ready.go:38] duration metric: took 7.506668596s for node "no-preload-368813" to be "Ready" ...
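After the kubelet restart, the start path waits up to 6m0s for the node's Ready condition before moving on to the per-pod checks below. A sketch of that kind of wait using client-go (the kubeconfig path, node name, and timeout are taken from the log; the 2s poll interval and the helper names are assumptions, not minikube's actual implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady returns true once the node reports the Ready condition as True.
    func nodeIsReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18649-3628/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-368813", metav1.GetOptions{})
    		if err == nil && nodeIsReady(node) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }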
	I0416 17:37:42.225891   53724 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:37:42.232019   53724 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:42.239399   53724 pod_ready.go:92] pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:42.239424   53724 pod_ready.go:81] duration metric: took 7.382463ms for pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:42.239434   53724 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:44.245133   53724 pod_ready.go:102] pod "etcd-no-preload-368813" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:42.564211   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:44.564866   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:45.746505   53724 pod_ready.go:92] pod "etcd-no-preload-368813" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.746524   53724 pod_ready.go:81] duration metric: took 3.507082575s for pod "etcd-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.746533   53724 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.751714   53724 pod_ready.go:92] pod "kube-apiserver-no-preload-368813" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.751735   53724 pod_ready.go:81] duration metric: took 5.194687ms for pod "kube-apiserver-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.751744   53724 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.757023   53724 pod_ready.go:92] pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.757044   53724 pod_ready.go:81] duration metric: took 5.292895ms for pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.757055   53724 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jtn9f" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.762143   53724 pod_ready.go:92] pod "kube-proxy-jtn9f" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.762160   53724 pod_ready.go:81] duration metric: took 5.099368ms for pod "kube-proxy-jtn9f" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.762168   53724 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.824087   53724 pod_ready.go:92] pod "kube-scheduler-no-preload-368813" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.824114   53724 pod_ready.go:81] duration metric: took 61.936492ms for pod "kube-scheduler-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.824127   53724 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:47.833773   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:47.064361   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:49.065629   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:50.332513   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:52.829819   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:51.564287   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:53.565257   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:54.832367   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:57.333539   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:56.063366   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:58.064649   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:59.830643   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:01.830706   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:03.831546   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:00.564098   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:02.564321   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:05.064376   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:06.332358   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:08.332809   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:07.066411   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:09.564507   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:10.335688   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:12.831165   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:12.065479   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:14.564685   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:14.831349   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:17.334921   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:16.565159   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:19.064669   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:21.456413   52649 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 17:38:21.456505   52649 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 17:38:21.458335   52649 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 17:38:21.458412   52649 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:38:21.458508   52649 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:38:21.458643   52649 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:38:21.458785   52649 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:38:21.458894   52649 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:38:21.460865   52649 out.go:204]   - Generating certificates and keys ...
	I0416 17:38:21.460958   52649 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:38:21.461049   52649 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:38:21.461155   52649 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 17:38:21.461246   52649 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 17:38:21.461344   52649 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 17:38:21.461405   52649 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 17:38:21.461459   52649 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 17:38:21.461510   52649 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 17:38:21.461577   52649 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 17:38:21.461655   52649 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 17:38:21.461693   52649 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 17:38:21.461742   52649 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:38:21.461785   52649 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:38:21.461863   52649 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:38:21.461929   52649 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:38:21.462002   52649 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:38:21.462136   52649 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:38:21.462265   52649 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:38:21.462335   52649 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:38:21.462420   52649 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:38:21.463927   52649 out.go:204]   - Booting up control plane ...
	I0416 17:38:21.464008   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:38:21.464082   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:38:21.464158   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:38:21.464243   52649 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:38:21.464465   52649 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:38:21.464563   52649 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 17:38:21.464669   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.464832   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.464919   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465080   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465137   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465369   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465440   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465617   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465696   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465892   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465907   52649 kubeadm.go:309] 
	I0416 17:38:21.465940   52649 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 17:38:21.465975   52649 kubeadm.go:309] 		timed out waiting for the condition
	I0416 17:38:21.465982   52649 kubeadm.go:309] 
	I0416 17:38:21.466011   52649 kubeadm.go:309] 	This error is likely caused by:
	I0416 17:38:21.466040   52649 kubeadm.go:309] 		- The kubelet is not running
	I0416 17:38:21.466153   52649 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 17:38:21.466164   52649 kubeadm.go:309] 
	I0416 17:38:21.466251   52649 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 17:38:21.466289   52649 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 17:38:21.466329   52649 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 17:38:21.466340   52649 kubeadm.go:309] 
	I0416 17:38:21.466452   52649 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 17:38:21.466521   52649 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 17:38:21.466529   52649 kubeadm.go:309] 
	I0416 17:38:21.466622   52649 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 17:38:21.466695   52649 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 17:38:21.466765   52649 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 17:38:21.466830   52649 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 17:38:21.466852   52649 kubeadm.go:309] 
	I0416 17:38:21.466885   52649 kubeadm.go:393] duration metric: took 8m3.560726976s to StartCluster
	I0416 17:38:21.466921   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:38:21.466981   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:38:21.517447   52649 cri.go:89] found id: ""
	I0416 17:38:21.517474   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.517485   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:38:21.517493   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:38:21.517556   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:38:21.558224   52649 cri.go:89] found id: ""
	I0416 17:38:21.558250   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.558260   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:38:21.558267   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:38:21.558326   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:38:21.608680   52649 cri.go:89] found id: ""
	I0416 17:38:21.608712   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.608727   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:38:21.608735   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:38:21.608786   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:38:21.648819   52649 cri.go:89] found id: ""
	I0416 17:38:21.648860   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.648867   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:38:21.648873   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:38:21.648917   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:38:21.689263   52649 cri.go:89] found id: ""
	I0416 17:38:21.689300   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.689310   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:38:21.689317   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:38:21.689374   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:38:21.729665   52649 cri.go:89] found id: ""
	I0416 17:38:21.729694   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.729703   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:38:21.729709   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:38:21.729755   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:38:21.768070   52649 cri.go:89] found id: ""
	I0416 17:38:21.768096   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.768103   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:38:21.768109   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:38:21.768158   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:38:21.803401   52649 cri.go:89] found id: ""
	I0416 17:38:21.803425   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.803435   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:38:21.803446   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:38:21.803461   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:38:21.859787   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:38:21.859820   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:38:21.874861   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:38:21.874887   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:38:21.962673   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:38:21.962700   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:38:21.962713   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:38:22.072141   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:38:22.072172   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0416 17:38:22.120555   52649 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 17:38:22.120603   52649 out.go:239] * 
	W0416 17:38:22.120651   52649 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:38:22.120675   52649 out.go:239] * 
	W0416 17:38:22.121636   52649 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:38:22.125185   52649 out.go:177] 
	W0416 17:38:22.126349   52649 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:38:22.126406   52649 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 17:38:22.126429   52649 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 17:38:22.127951   52649 out.go:177] 
	I0416 17:38:19.830879   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:21.836064   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:24.332210   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:21.566924   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:23.566969   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:26.332432   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:28.830548   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:25.569807   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:28.064148   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:30.831963   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:33.331209   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:30.564750   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:32.567402   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:35.065445   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:35.831533   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:37.831796   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:37.065899   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:39.067471   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:41.069216   55388 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.313174547s)
	I0416 17:38:41.069253   55388 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:38:41.069301   55388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:38:41.076898   55388 start.go:562] Will wait 60s for crictl version
	I0416 17:38:41.076951   55388 ssh_runner.go:195] Run: which crictl
	I0416 17:38:41.081337   55388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:38:41.128104   55388 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:38:41.128172   55388 ssh_runner.go:195] Run: crio --version
	I0416 17:38:41.159002   55388 ssh_runner.go:195] Run: crio --version
	I0416 17:38:41.195472   55388 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 17:38:40.330704   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:42.831448   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:41.196957   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:38:41.200164   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:38:41.200612   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:38:41.200644   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:38:41.200877   55388 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 17:38:41.206656   55388 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:38:41.206805   55388 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 17:38:41.206875   55388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:38:41.257627   55388 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:38:41.257655   55388 crio.go:433] Images already preloaded, skipping extraction
	I0416 17:38:41.257762   55388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:38:41.305018   55388 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:38:41.305046   55388 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:38:41.305055   55388 kubeadm.go:928] updating node { 192.168.39.149 8443 v1.30.0-rc.2 crio true true} ...
	I0416 17:38:41.305173   55388 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-633875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:38:41.305248   55388 ssh_runner.go:195] Run: crio config
	I0416 17:38:41.359664   55388 cni.go:84] Creating CNI manager for ""
	I0416 17:38:41.359696   55388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:38:41.359715   55388 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:38:41.359744   55388 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.149 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-633875 NodeName:kubernetes-upgrade-633875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:38:41.359908   55388 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-633875"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:38:41.359979   55388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 17:38:41.373144   55388 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:38:41.373207   55388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:38:41.385917   55388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0416 17:38:41.405308   55388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 17:38:41.423864   55388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0416 17:38:41.443249   55388 ssh_runner.go:195] Run: grep 192.168.39.149	control-plane.minikube.internal$ /etc/hosts
	I0416 17:38:41.448875   55388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:38:41.590435   55388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:38:41.609835   55388 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875 for IP: 192.168.39.149
	I0416 17:38:41.609859   55388 certs.go:194] generating shared ca certs ...
	I0416 17:38:41.609883   55388 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:38:41.610053   55388 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:38:41.610092   55388 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:38:41.610101   55388 certs.go:256] generating profile certs ...
	I0416 17:38:41.610187   55388 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.key
	I0416 17:38:41.610228   55388 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.key.cf32f48a
	I0416 17:38:41.610261   55388 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.key
	I0416 17:38:41.610369   55388 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:38:41.610401   55388 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:38:41.610411   55388 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:38:41.610438   55388 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:38:41.610465   55388 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:38:41.610492   55388 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:38:41.610527   55388 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:38:41.611132   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:38:41.639209   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:38:41.668015   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:38:41.699514   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:38:41.731845   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0416 17:38:41.760581   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:38:41.788600   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:38:41.816005   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 17:38:41.845209   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:38:41.872367   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:38:41.900529   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:38:41.929178   55388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:38:41.948433   55388 ssh_runner.go:195] Run: openssl version
	I0416 17:38:41.954947   55388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:38:41.967875   55388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:38:41.972786   55388 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:38:41.972853   55388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:38:41.979000   55388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:38:41.989155   55388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:38:42.001147   55388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:38:42.006473   55388 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:38:42.006524   55388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:38:42.013327   55388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:38:42.024143   55388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:38:42.036259   55388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:38:42.041932   55388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:38:42.041986   55388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:38:42.048336   55388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:38:42.059578   55388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:38:42.065212   55388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:38:42.071687   55388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:38:42.078229   55388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:38:42.085183   55388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:38:42.091386   55388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:38:42.097647   55388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 17:38:42.103890   55388 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:38:42.103977   55388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:38:42.104028   55388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:38:42.155231   55388 cri.go:89] found id: "34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5"
	I0416 17:38:42.155258   55388 cri.go:89] found id: "52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462"
	I0416 17:38:42.155264   55388 cri.go:89] found id: "064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4"
	I0416 17:38:42.155269   55388 cri.go:89] found id: "290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd"
	I0416 17:38:42.155278   55388 cri.go:89] found id: "7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552"
	I0416 17:38:42.155282   55388 cri.go:89] found id: "9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86"
	I0416 17:38:42.155286   55388 cri.go:89] found id: "58da5adfcfe9a58525a2761adbeb2f9e3bb58ff80053ab9f967a6f9e304669c0"
	I0416 17:38:42.155290   55388 cri.go:89] found id: ""
	I0416 17:38:42.155337   55388 ssh_runner.go:195] Run: sudo runc list -f json
	I0416 17:38:42.190019   55388 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4/userdata","rootfs":"/var/lib/containers/storage/overlay/01fd4c20ea1fe1c18346a216d14a88e48b33e2433642ea927d9f2b918ddd576b/merged","created":"2024-04-16T17:37:09.145318748Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1e34585d","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1e34585d\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-04-16T17:37:09.028083935Z","io.kubernetes.cri-o.Image":"461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.30.0-rc.2","io.kubernetes.cri-o.ImageRef":"461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-633875\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"dfe9712396e09e330c5a7eb325febfc6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-633875_dfe9712396e09e330c5a7eb325febfc6/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\
":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/01fd4c20ea1fe1c18346a216d14a88e48b33e2433642ea927d9f2b918ddd576b/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/
pods/dfe9712396e09e330c5a7eb325febfc6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/dfe9712396e09e330c5a7eb325febfc6/containers/kube-scheduler/8d07deaa\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"dfe9712396e09e330c5a7eb325febfc6","kubernetes.io/config.hash":"dfe9712396e09e330c5a7eb325febfc6","kubernetes.io/config.seen":"2024-04-16T17:36:55.586119768Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd","pid":0,"status":"stopped","b
undle":"/run/containers/storage/overlay-containers/290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd/userdata","rootfs":"/var/lib/containers/storage/overlay/31a4e7507da52eae9a130d65d20e8d87d27932f14d6b56155c6f0ecd141a8bd8/merged","created":"2024-04-16T17:36:56.553833142Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6217f75","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6217f75\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd","io.kubernetes.cr
i-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-04-16T17:36:56.431149927Z","io.kubernetes.cri-o.Image":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri-o.ImageRef":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-633875\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"115373a09145343a060ea5d2d8311604\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-633875_115373a09145343a060ea5d2d8311604/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/31a4e7507da52eae9a130d65d20e8d87d27932f14d6b56155c6f0ecd141a8bd8/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a
060ea5d2d8311604_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/115373a09145343a060ea5d2d8311604/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/115373a09145343a060ea5d2d8311604/containers/etcd/1680a1a3\",\"readonly\":false,\"propagation\":0,\"selinux_re
label\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"115373a09145343a060ea5d2d8311604","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.149:2379","kubernetes.io/config.hash":"115373a09145343a060ea5d2d8311604","kubernetes.io/config.seen":"2024-04-16T17:36:55.620424521Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6
da610617ecd2313bb28/userdata","rootfs":"/var/lib/containers/storage/overlay/9d0f1fe371ca69a9372f2657240927ad854aae401e115354d409810378fa2127/merged","created":"2024-04-16T17:36:56.210033095Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"b99f019b232accbb33fa16cc1df6908f\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.149:8443\",\"kubernetes.io/config.seen\":\"2024-04-16T17:36:55.586114311Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podb99f019b232accbb33fa16cc1df6908f","io.kubernetes.cri-o.ContainerID":"31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-04-16T1
7:36:56.095843734Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-633875","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-633875","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-633875\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"b99f019b232accbb33fa16cc1df6908f\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-633875_b99f019b232accbb33fa16cc1df6908f/31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28.log","io.kubernetes.cri-o.Metadata
":"{\"name\":\"kube-apiserver-kubernetes-upgrade-633875\",\"uid\":\"b99f019b232accbb33fa16cc1df6908f\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9d0f1fe371ca69a9372f2657240927ad854aae401e115354d409810378fa2127/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"31ad31dab21ae3
2ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b99f019b232accbb33fa16cc1df6908f","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.149:8443","kubernetes.io/config.hash":"b99f019b232accbb33fa16cc1df6908f","kubernetes.io/config.seen":"2024-04-16T17:36:55.586114311Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-contai
ners/34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5/userdata","rootfs":"/var/lib/containers/storage/overlay/0139d99c680355bb69b61933bce0d7c9bc0dedaf6b66b77ff646a876ae9628e5/merged","created":"2024-04-16T17:37:09.378661899Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6217f75","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6217f75\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.
cri-o.Created":"2024-04-16T17:37:09.23163257Z","io.kubernetes.cri-o.Image":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri-o.ImageRef":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-633875\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"115373a09145343a060ea5d2d8311604\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-633875_115373a09145343a060ea5d2d8311604/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0139d99c680355bb69b61933bce0d7c9bc0dedaf6b66b77ff646a876ae9628e5/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_1","io.kubernete
s.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/115373a09145343a060ea5d2d8311604/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/115373a09145343a060ea5d2d8311604/containers/etcd/8dea6fff\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\
":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"115373a09145343a060ea5d2d8311604","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.149:2379","kubernetes.io/config.hash":"115373a09145343a060ea5d2d8311604","kubernetes.io/config.seen":"2024-04-16T17:36:55.620424521Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462/userdata","ro
otfs":"/var/lib/containers/storage/overlay/446a882dcd8517d15b7b6b958b591d3cd30d0eae87df0ddf64379eda8e8e676d/merged","created":"2024-04-16T17:37:09.238055718Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd70a4e3","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd70a4e3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-04-16T17:37:09.093943705Z","io.kubernetes.cri-o.Im
age":"65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.30.0-rc.2","io.kubernetes.cri-o.ImageRef":"65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-633875\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b99f019b232accbb33fa16cc1df6908f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-633875_b99f019b232accbb33fa16cc1df6908f/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/446a882dcd8517d15b7b6b958b591d3cd30d0eae87df0ddf64379eda8e8e676d/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_1"
,"io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b99f019b232accbb33fa16cc1df6908f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b99f019b232accbb33fa16cc1df6908f/containers/kube-apiserver/a4dedb27\",\"readonly\":false,\"propagation\":0,\"selinux_r
elabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b99f019b232accbb33fa16cc1df6908f","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.149:8443","kubernetes.io/config.hash":"b99f019b232accbb33fa16cc1df6908f","kubernetes.io/config.seen":"2024-04-16T17:36:55.586114311Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"58da5adfcfe9a58525a2761adbeb2f9e3bb58ff800
53ab9f967a6f9e304669c0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/58da5adfcfe9a58525a2761adbeb2f9e3bb58ff80053ab9f967a6f9e304669c0/userdata","rootfs":"/var/lib/containers/storage/overlay/111281b8c26a91cb1925eb4bfd522b04042b671ce5aea5d2e6035314ce3e6a78/merged","created":"2024-04-16T17:36:56.362877005Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd70a4e3","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd70a4e3\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"58da5adfcfe9a58525
a2761adbeb2f9e3bb58ff80053ab9f967a6f9e304669c0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-04-16T17:36:56.291427349Z","io.kubernetes.cri-o.Image":"65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.30.0-rc.2","io.kubernetes.cri-o.ImageRef":"65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-633875\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b99f019b232accbb33fa16cc1df6908f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-633875_b99f019b232accbb33fa16cc1df6908f/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/111281b8c26a91cb1925eb4bfd522b04042b671c
e5aea5d2e6035314ce3e6a78/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b99f019b232accbb33fa16cc1df6908f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termin
ation-log\",\"host_path\":\"/var/lib/kubelet/pods/b99f019b232accbb33fa16cc1df6908f/containers/kube-apiserver/272d60f6\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b99f019b232accbb33fa16cc1df6908f","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.149:8443","kubernetes.io/config.hash":"b99f019b232accbb33fa16cc1df6908f","kubernetes.io/config
.seen":"2024-04-16T17:36:55.586114311Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731/userdata","rootfs":"/var/lib/containers/storage/overlay/d43d7b3329799c5c8c6570aa3203ff673781c44e0de40ce42dc41fd721d8e144/merged","created":"2024-04-16T17:36:56.241611933Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"115373a09145343a060ea5d2d8311604\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.149:2379\",\"kubernetes.io/config.seen\":\"2024-04-16T17:36:55.620424521Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod115373a09145343a060ea5d2d8311604","io.kubernetes.cri-o.Con
tainerID":"6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-04-16T17:36:56.108079284Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-633875","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-633875","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"115373a09145343a060ea5d2d8311604\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-633875\",\"component\":\"etcd\",\"tier\":\"control-plane\",\"io.kubernet
es.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-633875_115373a09145343a060ea5d2d8311604/6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-633875\",\"uid\":\"115373a09145343a060ea5d2d8311604\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d43d7b3329799c5c8c6570aa3203ff673781c44e0de40ce42dc41fd721d8e144/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.Reso
lvPath":"/var/run/containers/storage/overlay-containers/6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"115373a09145343a060ea5d2d8311604","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.149:2379","kubernetes.io/config.hash":"115373a09145343a060ea5d2d8311604","kubernetes.io/config.seen":"2024-04-16T17:36:55.620424521Z","kubernetes.io/config.source":"file","tier":"co
ntrol-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc/userdata","rootfs":"/var/lib/containers/storage/overlay/b926f786e0400d8a81453a1b468af571f89015452894b6085f83d5510e294386/merged","created":"2024-04-16T17:37:08.990113094Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-04-16T17:36:55.620424521Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"115373a09145343a060ea5d2d8311604\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.39.149:2379\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod115373a09145343a060ea5d2d8311604","io.kubernetes.cri-o.ContainerID":"7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c
766a21273fc","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-04-16T17:37:08.850542085Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-633875","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-633875","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"115373a09145343a060ea5d2d8311604\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-633875\",\"tier\":\"control-plane\",\"component\":\"etcd\"}","io.kubernetes.cri-o.LogPath":"/va
r/log/pods/kube-system_etcd-kubernetes-upgrade-633875_115373a09145343a060ea5d2d8311604/7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-633875\",\"uid\":\"115373a09145343a060ea5d2d8311604\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b926f786e0400d8a81453a1b468af571f89015452894b6085f83d5510e294386/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-conta
iners/7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-633875_kube-system_115373a09145343a060ea5d2d8311604_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"115373a09145343a060ea5d2d8311604","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.149:2379","kubernetes.io/config.hash":"115373a09145343a060ea5d2d8311604","kubernetes.io/config.seen":"2024-04-16T17:36:55.620424521Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2
-dev","id":"7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552/userdata","rootfs":"/var/lib/containers/storage/overlay/ea1df843ccc3002f9104ed9425b737edc3ddfbc33cab6814017ca4153e8f90e0/merged","created":"2024-04-16T17:36:56.506418072Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c80ec39b","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c80ec39b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\"
:\"30\"}","io.kubernetes.cri-o.ContainerID":"7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-04-16T17:36:56.416371803Z","io.kubernetes.cri-o.Image":"ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.30.0-rc.2","io.kubernetes.cri-o.ImageRef":"ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-633875\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d6a43e7dc4ba35745d26de8ff0be2595\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-633875_d6a43e7dc4ba35745d26de8ff0be2595/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manag
er\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ea1df843ccc3002f9104ed9425b737edc3ddfbc33cab6814017ca4153e8f90e0/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pod
s/d6a43e7dc4ba35745d26de8ff0be2595/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d6a43e7dc4ba35745d26de8ff0be2595/containers/kube-controller-manager/19b014ce\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-p
lugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d6a43e7dc4ba35745d26de8ff0be2595","kubernetes.io/config.hash":"d6a43e7dc4ba35745d26de8ff0be2595","kubernetes.io/config.seen":"2024-04-16T17:36:55.586118673Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86/userdata","rootfs":"/var/lib/containers/storage/overlay/7eaec337b527cf5ce7dd54c5b0ddaf328d631d2b9901633721de9a6c7fbe62ff/merged","created":"2024-04-16T17:36:56.448394901Z","annotations":{"io.container.manager":"cri-o",
"io.kubernetes.container.hash":"1e34585d","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1e34585d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-04-16T17:36:56.385114192Z","io.kubernetes.cri-o.Image":"461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.30.0-rc.2","io.kubernetes.cri-o.ImageRef":"461015b94df4b9e0beae696
3e44faa05142f2bddf16b1956a2c09ccefe0416a6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-633875\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"dfe9712396e09e330c5a7eb325febfc6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-633875_dfe9712396e09e330c5a7eb325febfc6/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7eaec337b527cf5ce7dd54c5b0ddaf328d631d2b9901633721de9a6c7fbe62ff/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2/userdata/resolv.conf","io.k
ubernetes.cri-o.SandboxID":"efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/dfe9712396e09e330c5a7eb325febfc6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/dfe9712396e09e330c5a7eb325febfc6/containers/kube-scheduler/6d36c835\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-sch
eduler-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"dfe9712396e09e330c5a7eb325febfc6","kubernetes.io/config.hash":"dfe9712396e09e330c5a7eb325febfc6","kubernetes.io/config.seen":"2024-04-16T17:36:55.586119768Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a/userdata","rootfs":"/var/lib/containers/storage/overlay/9717fa1a5501d1055d986768899734f4fb6b08e63bdd35e092738722b1b7ae10/merged","created":"2024-04-16T17:37:08.900541573Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"dfe9712396e09e330c5a7eb32
5febfc6\",\"kubernetes.io/config.seen\":\"2024-04-16T17:36:55.586119768Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/poddfe9712396e09e330c5a7eb325febfc6","io.kubernetes.cri-o.ContainerID":"b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-04-16T17:37:08.828628848Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-633875","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-633875","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.na
mespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-633875\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"dfe9712396e09e330c5a7eb325febfc6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-633875_dfe9712396e09e330c5a7eb325febfc6/b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-633875\",\"uid\":\"dfe9712396e09e330c5a7eb325febfc6\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9717fa1a5501d1055d986768899734f4fb6b08e63bdd35e092738722b1b7ae10/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.
kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernete
s.pod.uid":"dfe9712396e09e330c5a7eb325febfc6","kubernetes.io/config.hash":"dfe9712396e09e330c5a7eb325febfc6","kubernetes.io/config.seen":"2024-04-16T17:36:55.586119768Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db/userdata","rootfs":"/var/lib/containers/storage/overlay/af504a230c1f8837221e170e7458746fa8f8eee9798f297448fe555a8cc38038/merged","created":"2024-04-16T17:36:56.231561362Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-04-16T17:36:55.586118673Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"d6a43e7dc4ba35745d26de8ff0be2595\"}","io.kubernetes.cri-o.CgroupPar
ent":"/kubepods/burstable/podd6a43e7dc4ba35745d26de8ff0be2595","io.kubernetes.cri-o.ContainerID":"b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-04-16T17:36:56.091894742Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-633875","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-633875","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"d6a43e7dc4ba35745d26de8ff0be2595\",\"io.kubernetes.pod.namespace\":\"kube-system\",
\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-633875\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-633875_d6a43e7dc4ba35745d26de8ff0be2595/b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-633875\",\"uid\":\"d6a43e7dc4ba35745d26de8ff0be2595\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/af504a230c1f8837221e170e7458746fa8f8eee9798f297448fe555a8cc38038/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri
-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"d6a43e7dc4ba35745d26de8ff0be
2595","kubernetes.io/config.hash":"d6a43e7dc4ba35745d26de8ff0be2595","kubernetes.io/config.seen":"2024-04-16T17:36:55.586118673Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953/userdata","rootfs":"/var/lib/containers/storage/overlay/50c050df7c9f3047b02bd421ce964416da0190dfcc7a39e53130df373f662f39/merged","created":"2024-04-16T17:37:08.992583315Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-04-16T17:36:55.586118673Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"d6a43e7dc4ba35745d26de8ff0be2595\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podd6a43e7dc4b
a35745d26de8ff0be2595","io.kubernetes.cri-o.ContainerID":"b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-04-16T17:37:08.829628426Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-633875","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-633875","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-633875\",\"tier\":\"control-pla
ne\",\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"d6a43e7dc4ba35745d26de8ff0be2595\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-633875_d6a43e7dc4ba35745d26de8ff0be2595/b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-633875\",\"uid\":\"d6a43e7dc4ba35745d26de8ff0be2595\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/50c050df7c9f3047b02bd421ce964416da0190dfcc7a39e53130df373f662f39/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"
cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"d6a43e7dc4ba35745d26de8ff0be2595","kubernetes.io/confi
g.hash":"d6a43e7dc4ba35745d26de8ff0be2595","kubernetes.io/config.seen":"2024-04-16T17:36:55.586118673Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce/userdata","rootfs":"/var/lib/containers/storage/overlay/cff9d0aa7fdd031718ede0159945b29ff72fb4bc8ebda4e04ced76f3989ee088/merged","created":"2024-04-16T17:37:08.9394355Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-04-16T17:36:55.586114311Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"b99f019b232accbb33fa16cc1df6908f\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.39.149:8443\"}","io.kubernetes.
cri-o.CgroupParent":"/kubepods/burstable/podb99f019b232accbb33fa16cc1df6908f","io.kubernetes.cri-o.ContainerID":"b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-04-16T17:37:08.851507713Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-633875","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-633875","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-633
875\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"b99f019b232accbb33fa16cc1df6908f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-633875_b99f019b232accbb33fa16cc1df6908f/b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-633875\",\"uid\":\"b99f019b232accbb33fa16cc1df6908f\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cff9d0aa7fdd031718ede0159945b29ff72fb4bc8ebda4e04ced76f3989ee088/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_peri
od\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-633875_kube-system_b99f019b232accbb33fa16cc1df6908f_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"b99f019b232accbb33fa16cc1df6908f","kubeadm.kubernetes.io/kube-apiserver.advertis
e-address.endpoint":"192.168.39.149:8443","kubernetes.io/config.hash":"b99f019b232accbb33fa16cc1df6908f","kubernetes.io/config.seen":"2024-04-16T17:36:55.586114311Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2/userdata","rootfs":"/var/lib/containers/storage/overlay/10b4d4d5a4303a66c11850dac26d09c5c2e4b666737298cb1b5d73079bf9fda9/merged","created":"2024-04-16T17:36:56.179549416Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"dfe9712396e09e330c5a7eb325febfc6\",\"kubernetes.io/config.seen\":\"2024-04-16T17:36:55.586119768Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepo
ds/burstable/poddfe9712396e09e330c5a7eb325febfc6","io.kubernetes.cri-o.ContainerID":"efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-04-16T17:36:56.093552854Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-633875","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-633875","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"dfe9712396e09e330c5a7eb325febfc6\",\"io.kubernetes.pod.namespace\":\"kube
-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-633875\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-633875_dfe9712396e09e330c5a7eb325febfc6/efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-633875\",\"uid\":\"dfe9712396e09e330c5a7eb325febfc6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/10b4d4d5a4303a66c11850dac26d09c5c2e4b666737298cb1b5d73079bf9fda9/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\"
:{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-633875_kube-system_dfe9712396e09e330c5a7eb325febfc6_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-633875","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"dfe9712396e09e330c5a7eb325febfc6","kubernetes.io/config.hash":"dfe9712396e09e330c5a7eb325febfc6","kubernetes.io/config.see
n":"2024-04-16T17:36:55.586119768Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"}]
	I0416 17:38:42.190753   55388 cri.go:126] list returned 15 containers
	I0416 17:38:42.190770   55388 cri.go:129] container: {ID:064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4 Status:stopped}
	I0416 17:38:42.190800   55388 cri.go:135] skipping {064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4 stopped}: state = "stopped", want "paused"
	I0416 17:38:42.190820   55388 cri.go:129] container: {ID:290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd Status:stopped}
	I0416 17:38:42.190828   55388 cri.go:135] skipping {290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd stopped}: state = "stopped", want "paused"
	I0416 17:38:42.190837   55388 cri.go:129] container: {ID:31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28 Status:stopped}
	I0416 17:38:42.190849   55388 cri.go:131] skipping 31ad31dab21ae32ca3c57f4b1b55d870d0f7c1ed4e4e6da610617ecd2313bb28 - not in ps
	I0416 17:38:42.190857   55388 cri.go:129] container: {ID:34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5 Status:stopped}
	I0416 17:38:42.190869   55388 cri.go:135] skipping {34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5 stopped}: state = "stopped", want "paused"
	I0416 17:38:42.190886   55388 cri.go:129] container: {ID:52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462 Status:stopped}
	I0416 17:38:42.190895   55388 cri.go:135] skipping {52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462 stopped}: state = "stopped", want "paused"
	I0416 17:38:42.190903   55388 cri.go:129] container: {ID:58da5adfcfe9a58525a2761adbeb2f9e3bb58ff80053ab9f967a6f9e304669c0 Status:stopped}
	I0416 17:38:42.190940   55388 cri.go:135] skipping {58da5adfcfe9a58525a2761adbeb2f9e3bb58ff80053ab9f967a6f9e304669c0 stopped}: state = "stopped", want "paused"
	I0416 17:38:42.190955   55388 cri.go:129] container: {ID:6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731 Status:stopped}
	I0416 17:38:42.190962   55388 cri.go:131] skipping 6d02115c452243dba00c46947a8e2b794c12312faee4c8f2d96f45e058c4b731 - not in ps
	I0416 17:38:42.190967   55388 cri.go:129] container: {ID:7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc Status:stopped}
	I0416 17:38:42.190975   55388 cri.go:131] skipping 7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc - not in ps
	I0416 17:38:42.190979   55388 cri.go:129] container: {ID:7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552 Status:stopped}
	I0416 17:38:42.190989   55388 cri.go:135] skipping {7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552 stopped}: state = "stopped", want "paused"
	I0416 17:38:42.191000   55388 cri.go:129] container: {ID:9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86 Status:stopped}
	I0416 17:38:42.191011   55388 cri.go:135] skipping {9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86 stopped}: state = "stopped", want "paused"
	I0416 17:38:42.191019   55388 cri.go:129] container: {ID:b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a Status:stopped}
	I0416 17:38:42.191025   55388 cri.go:131] skipping b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a - not in ps
	I0416 17:38:42.191032   55388 cri.go:129] container: {ID:b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db Status:stopped}
	I0416 17:38:42.191041   55388 cri.go:131] skipping b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db - not in ps
	I0416 17:38:42.191047   55388 cri.go:129] container: {ID:b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953 Status:stopped}
	I0416 17:38:42.191053   55388 cri.go:131] skipping b50a87b261e70f1a23a5687c273abb1501795d918f404e857b9fa75e58d48953 - not in ps
	I0416 17:38:42.191056   55388 cri.go:129] container: {ID:b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce Status:stopped}
	I0416 17:38:42.191062   55388 cri.go:131] skipping b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce - not in ps
	I0416 17:38:42.191064   55388 cri.go:129] container: {ID:efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2 Status:stopped}
	I0416 17:38:42.191070   55388 cri.go:131] skipping efb3ffa2dd3992c8316ddb9b805d0cfd077b165bafdad3aa665ef6a59965a1f2 - not in ps
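
	The cri.go lines above show minikube enumerating every CRI container and skipping each one whose state is "stopped" (or that is absent from `crictl ps`) because, before a restart, it only cares about containers left in the "paused" state. A minimal sketch of that filter in Go follows; the Container type and function names here are hypothetical stand-ins, not minikube's actual cri package.

	package main

	import "fmt"

	// Container is a hypothetical stand-in for the entries returned by the
	// runtime's list call (what `crictl ps -a` reports).
	type Container struct {
		ID     string
		Status string
	}

	// filterByState keeps only containers whose status matches want, printing
	// a "skipping ..." line for the rest, mirroring the log output above.
	func filterByState(all []Container, want string) []Container {
		var kept []Container
		for _, c := range all {
			if c.Status != want {
				fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
				continue
			}
			kept = append(kept, c)
		}
		return kept
	}

	func main() {
		containers := []Container{
			{ID: "064ba366dd5c", Status: "stopped"},
			{ID: "b631ec24f985", Status: "paused"},
		}
		paused := filterByState(containers, "paused")
		fmt.Printf("%d container(s) left to unpause\n", len(paused))
	}

	In the run above every container is "stopped", so the filtered list is empty and minikube proceeds straight to the cluster-restart path.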
	I0416 17:38:42.191110   55388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 17:38:42.202315   55388 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 17:38:42.202337   55388 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 17:38:42.202341   55388 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 17:38:42.202387   55388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 17:38:42.212916   55388 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:38:42.213898   55388 kubeconfig.go:125] found "kubernetes-upgrade-633875" server: "https://192.168.39.149:8443"
	I0416 17:38:42.215632   55388 kapi.go:59] client config for kubernetes-upgrade-633875: &rest.Config{Host:"https://192.168.39.149:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.crt", KeyFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.key", CAFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
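
	The kapi.go line above dumps the rest.Config minikube builds from the profile's client certificate, key, and cluster CA. A hedged sketch of building an equivalent client with client-go is shown below; the file paths are placeholders, not the exact integration-test layout.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Placeholder paths; the real ones live under the minikube profile directory.
		cfg := &rest.Config{
			Host: "https://192.168.39.149:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/path/to/client.crt",
				KeyFile:  "/path/to/client.key",
				CAFile:   "/path/to/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("kube-system has %d pods\n", len(pods.Items))
	}

	The same client config is what later drives the system_pods and addon checks in this log.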
	I0416 17:38:42.216324   55388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 17:38:42.226739   55388 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.149
	I0416 17:38:42.226778   55388 kubeadm.go:1154] stopping kube-system containers ...
	I0416 17:38:42.226790   55388 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 17:38:42.226844   55388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:38:42.266216   55388 cri.go:89] found id: "34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5"
	I0416 17:38:42.266238   55388 cri.go:89] found id: "52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462"
	I0416 17:38:42.266241   55388 cri.go:89] found id: "064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4"
	I0416 17:38:42.266245   55388 cri.go:89] found id: "290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd"
	I0416 17:38:42.266251   55388 cri.go:89] found id: "7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552"
	I0416 17:38:42.266253   55388 cri.go:89] found id: "9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86"
	I0416 17:38:42.266256   55388 cri.go:89] found id: "58da5adfcfe9a58525a2761adbeb2f9e3bb58ff80053ab9f967a6f9e304669c0"
	I0416 17:38:42.266258   55388 cri.go:89] found id: ""
	I0416 17:38:42.266263   55388 cri.go:234] Stopping containers: [34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5 52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462 064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4 290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd 7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552 9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86 58da5adfcfe9a58525a2761adbeb2f9e3bb58ff80053ab9f967a6f9e304669c0]
	I0416 17:38:42.266304   55388 ssh_runner.go:195] Run: which crictl
	I0416 17:38:42.270676   55388 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5 52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462 064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4 290a4445fcfadb2a74b3e28b6372cf285f23f427e9d3a8acc7386fc8c7669fdd 7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552 9023f498c2a6449185f992efd3cbad15e3f4210ff92c517df8456048d0058e86 58da5adfcfe9a58525a2761adbeb2f9e3bb58ff80053ab9f967a6f9e304669c0
	I0416 17:38:42.349954   55388 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 17:38:42.391735   55388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:38:42.402553   55388 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Apr 16 17:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Apr 16 17:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Apr 16 17:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Apr 16 17:36 /etc/kubernetes/scheduler.conf
	
	I0416 17:38:42.402612   55388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:38:42.413074   55388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:38:42.423122   55388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:38:42.432751   55388 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:38:42.432805   55388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:38:42.442599   55388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:38:42.452473   55388 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:38:42.452525   55388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:38:42.463137   55388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:38:42.473404   55388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:38:42.541212   55388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:38:43.329661   55388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:38:43.561396   55388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:38:43.637673   55388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
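
	The five ssh_runner lines above show the restart path re-running individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of a full `kubeadm init`. A rough local illustration of driving those phases with os/exec is sketched below; minikube actually runs them over its own ssh_runner, and the binaries path is copied from the log rather than guaranteed.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phase names taken from the log above.
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, phase := range phases {
			cmd := fmt.Sprintf(
				"sudo env PATH=/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
				phase,
			)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
				return
			}
			fmt.Printf("phase %q done\n", phase)
		}
	}

	Running the phases individually is what lets the existing etcd data and certificates be reused rather than re-bootstrapping the cluster from scratch.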
	I0416 17:38:43.743747   55388 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:38:43.743847   55388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:38:44.244865   55388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:38:44.744646   55388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:38:41.565090   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:43.566341   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:44.783638   55388 api_server.go:72] duration metric: took 1.039889931s to wait for apiserver process to appear ...
	I0416 17:38:44.783670   55388 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:38:44.783693   55388 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0416 17:38:44.784409   55388 api_server.go:269] stopped: https://192.168.39.149:8443/healthz: Get "https://192.168.39.149:8443/healthz": dial tcp 192.168.39.149:8443: connect: connection refused
	I0416 17:38:45.283814   55388 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0416 17:38:47.174838   55388 api_server.go:279] https://192.168.39.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 17:38:47.174872   55388 api_server.go:103] status: https://192.168.39.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 17:38:47.174886   55388 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0416 17:38:47.224993   55388 api_server.go:279] https://192.168.39.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 17:38:47.225025   55388 api_server.go:103] status: https://192.168.39.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 17:38:47.284208   55388 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0416 17:38:47.290731   55388 api_server.go:279] https://192.168.39.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 17:38:47.290764   55388 api_server.go:103] status: https://192.168.39.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 17:38:47.784359   55388 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0416 17:38:47.788771   55388 api_server.go:279] https://192.168.39.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 17:38:47.788801   55388 api_server.go:103] status: https://192.168.39.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 17:38:48.284396   55388 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0416 17:38:48.288937   55388 api_server.go:279] https://192.168.39.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 17:38:48.288968   55388 api_server.go:103] status: https://192.168.39.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 17:38:48.784124   55388 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0416 17:38:48.788643   55388 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0416 17:38:48.794998   55388 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 17:38:48.795023   55388 api_server.go:131] duration metric: took 4.011345822s to wait for apiserver health ...
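
	The api_server.go lines above poll https://192.168.39.149:8443/healthz roughly every 500ms, treating connection refused, 403 (anonymous access before the RBAC bootstrap roles exist), and 500 (post-start hooks still failing) as "not ready yet" until a plain 200/ok arrives. A simplified polling loop in Go, assuming anonymous access and skipping certificate verification purely for illustration, might look like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline passes. TLS verification is skipped only because this is a
	// self-signed illustrative setup; a real client would trust the cluster CA.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			} else {
				fmt.Printf("healthz not reachable yet: %v\n", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.149:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}

	In this run the endpoint went from connection refused to 403 to 500 to "ok" in about four seconds, which is the 4.011s duration metric reported above.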
	I0416 17:38:48.795031   55388 cni.go:84] Creating CNI manager for ""
	I0416 17:38:48.795037   55388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:38:48.796971   55388 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 17:38:48.798339   55388 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 17:38:48.810095   55388 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
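
	The two lines above show the bridge CNI step: minikube creates /etc/cni/net.d and copies a 496-byte 1-k8s.conflist into it. The sketch below writes an illustrative bridge-plugin conflist from Go; the JSON content is an assumption about what such a file typically contains, not minikube's exact 496-byte payload, and the output path is a local scratch file rather than /etc/cni/net.d.

	package main

	import (
		"fmt"
		"os"
	)

	// An illustrative bridge-plugin conflist; minikube's shipped file may differ.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() {
		// Writing to a scratch path here; the real target is /etc/cni/net.d/1-k8s.conflist.
		if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println("write failed:", err)
			return
		}
		fmt.Println("wrote 1-k8s.conflist")
	}

	With a conflist like this in place, CRI-O's CNI plumbing can give pods bridge networking, which is why the bridge CNI is "recommended" for the kvm2 + crio combination noted above.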
	I0416 17:38:48.833315   55388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:38:48.842482   55388 system_pods.go:59] 4 kube-system pods found
	I0416 17:38:48.842519   55388 system_pods.go:61] "etcd-kubernetes-upgrade-633875" [8cfb060d-3e0e-4b6a-b6cf-7fce5d2f1ad6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 17:38:48.842539   55388 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-633875" [0ffbb104-8332-4d80-956a-fb455f40fcc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 17:38:48.842556   55388 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-633875" [35fcf618-76aa-489d-be60-d50fcd3edccc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 17:38:48.842572   55388 system_pods.go:61] "storage-provisioner" [65369de2-bc12-4629-b5db-2cab5bed2e41] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0416 17:38:48.842580   55388 system_pods.go:74] duration metric: took 9.245609ms to wait for pod list to return data ...
	I0416 17:38:48.842592   55388 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:38:48.846075   55388 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:38:48.846097   55388 node_conditions.go:123] node cpu capacity is 2
	I0416 17:38:48.846106   55388 node_conditions.go:105] duration metric: took 3.509545ms to run NodePressure ...
	I0416 17:38:48.846120   55388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:38:49.157602   55388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:38:49.170654   55388 ops.go:34] apiserver oom_adj: -16
	I0416 17:38:49.170676   55388 kubeadm.go:591] duration metric: took 6.968329003s to restartPrimaryControlPlane
	I0416 17:38:49.170684   55388 kubeadm.go:393] duration metric: took 7.066807196s to StartCluster
	I0416 17:38:49.170698   55388 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:38:49.170778   55388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:38:49.173243   55388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:38:49.173558   55388 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:38:49.175163   55388 out.go:177] * Verifying Kubernetes components...
	I0416 17:38:49.173594   55388 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:38:49.173756   55388 config.go:182] Loaded profile config "kubernetes-upgrade-633875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:38:49.175197   55388 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-633875"
	I0416 17:38:49.175217   55388 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-633875"
	I0416 17:38:49.175244   55388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-633875"
	I0416 17:38:49.176484   55388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:38:49.175248   55388 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-633875"
	W0416 17:38:49.176511   55388 addons.go:243] addon storage-provisioner should already be in state true
	I0416 17:38:49.176547   55388 host.go:66] Checking if "kubernetes-upgrade-633875" exists ...
	I0416 17:38:49.176766   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:38:49.176794   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:38:49.176917   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:38:49.176956   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:38:49.192744   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0416 17:38:49.193254   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:38:49.193825   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:38:49.193851   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:38:49.194246   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:38:49.194470   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetState
	I0416 17:38:49.196552   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0416 17:38:49.197033   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:38:49.197483   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:38:49.197497   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:38:49.197831   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:38:49.197908   55388 kapi.go:59] client config for kubernetes-upgrade-633875: &rest.Config{Host:"https://192.168.39.149:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.crt", KeyFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/client.key", CAFile:"/home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 17:38:49.198191   55388 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-633875"
	W0416 17:38:49.198209   55388 addons.go:243] addon default-storageclass should already be in state true
	I0416 17:38:49.198247   55388 host.go:66] Checking if "kubernetes-upgrade-633875" exists ...
	I0416 17:38:49.198367   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:38:49.198400   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:38:49.198615   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:38:49.198678   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:38:49.213424   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0416 17:38:49.213618   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0416 17:38:49.213962   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:38:49.214367   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:38:49.214382   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:38:49.214698   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:38:49.215010   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:38:49.215079   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetState
	I0416 17:38:49.215663   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:38:49.215677   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:38:49.216250   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:38:49.216764   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:38:49.216863   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:38:49.216908   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:38:49.218859   55388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:38:44.834910   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:47.338454   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:49.220209   55388 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:38:49.220227   55388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:38:49.220243   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:38:49.223307   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:38:49.223693   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:38:49.223715   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:38:49.223974   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:38:49.224145   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:38:49.224295   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:38:49.224444   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:38:49.236936   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
	I0416 17:38:49.237647   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:38:49.238091   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:38:49.238118   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:38:49.238511   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:38:49.238743   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetState
	I0416 17:38:49.240291   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:38:49.240620   55388 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:38:49.240831   55388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:38:49.240876   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:38:49.243066   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:38:49.243374   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:38:49.243398   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:38:49.243574   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:38:49.243778   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:38:49.243952   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:38:49.244116   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:38:49.352567   55388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:38:49.372607   55388 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:38:49.372682   55388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:38:49.388023   55388 api_server.go:72] duration metric: took 214.429259ms to wait for apiserver process to appear ...
	I0416 17:38:49.388042   55388 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:38:49.388057   55388 api_server.go:253] Checking apiserver healthz at https://192.168.39.149:8443/healthz ...
	I0416 17:38:49.393483   55388 api_server.go:279] https://192.168.39.149:8443/healthz returned 200:
	ok
	I0416 17:38:49.394480   55388 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 17:38:49.394497   55388 api_server.go:131] duration metric: took 6.449138ms to wait for apiserver health ...
	I0416 17:38:49.394505   55388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:38:49.399034   55388 system_pods.go:59] 4 kube-system pods found
	I0416 17:38:49.399066   55388 system_pods.go:61] "etcd-kubernetes-upgrade-633875" [8cfb060d-3e0e-4b6a-b6cf-7fce5d2f1ad6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 17:38:49.399095   55388 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-633875" [0ffbb104-8332-4d80-956a-fb455f40fcc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 17:38:49.399112   55388 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-633875" [35fcf618-76aa-489d-be60-d50fcd3edccc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 17:38:49.399123   55388 system_pods.go:61] "storage-provisioner" [65369de2-bc12-4629-b5db-2cab5bed2e41] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0416 17:38:49.399132   55388 system_pods.go:74] duration metric: took 4.615541ms to wait for pod list to return data ...
	I0416 17:38:49.399146   55388 kubeadm.go:576] duration metric: took 225.555007ms to wait for: map[apiserver:true system_pods:true]
	I0416 17:38:49.399165   55388 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:38:49.401743   55388 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:38:49.401760   55388 node_conditions.go:123] node cpu capacity is 2
	I0416 17:38:49.401768   55388 node_conditions.go:105] duration metric: took 2.59795ms to run NodePressure ...
	I0416 17:38:49.401778   55388 start.go:240] waiting for startup goroutines ...
	I0416 17:38:49.462978   55388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:38:49.483747   55388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:38:50.120674   55388 main.go:141] libmachine: Making call to close driver server
	I0416 17:38:50.120705   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .Close
	I0416 17:38:50.120715   55388 main.go:141] libmachine: Making call to close driver server
	I0416 17:38:50.120729   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .Close
	I0416 17:38:50.121024   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Closing plugin on server side
	I0416 17:38:50.121031   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Closing plugin on server side
	I0416 17:38:50.121061   55388 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:38:50.121099   55388 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:38:50.121194   55388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:38:50.121236   55388 main.go:141] libmachine: Making call to close driver server
	I0416 17:38:50.121251   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .Close
	I0416 17:38:50.121250   55388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:38:50.121263   55388 main.go:141] libmachine: Making call to close driver server
	I0416 17:38:50.121275   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .Close
	I0416 17:38:50.121495   55388 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:38:50.121521   55388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:38:50.121580   55388 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:38:50.121581   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Closing plugin on server side
	I0416 17:38:50.121590   55388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:38:50.128509   55388 main.go:141] libmachine: Making call to close driver server
	I0416 17:38:50.128525   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .Close
	I0416 17:38:50.128750   55388 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:38:50.128765   55388 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:38:50.128785   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | Closing plugin on server side
	I0416 17:38:50.130649   55388 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 17:38:50.131939   55388 addons.go:505] duration metric: took 958.348902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 17:38:50.131980   55388 start.go:245] waiting for cluster config update ...
	I0416 17:38:50.131994   55388 start.go:254] writing updated cluster config ...
	I0416 17:38:50.132231   55388 ssh_runner.go:195] Run: rm -f paused
	I0416 17:38:50.181881   55388 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0416 17:38:50.183528   55388 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-633875" cluster and "default" namespace by default
	I0416 17:38:46.065177   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:48.065275   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:50.065516   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Apr 16 17:38:50 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:50.970067510Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289130970033969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4282809a-074b-4e50-b871-3eba6fd03774 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:50 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:50.971004835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae1f438e-ab30-4256-b5a2-bdd28e10344b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:50 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:50.971112964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae1f438e-ab30-4256-b5a2-bdd28e10344b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:50 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:50.971338334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c860f12b782313847528f71dc53d137273459c49c9223226af699d85c131a426,PodSandboxId:d0802af3695afb88bd804fe90dd39104a08d28c3add88db3891bf1e1a7250971,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289124554817029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe9712396e09e330c5a7eb325febfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0d3c5972bb71d7775cfab7f70eb4db8a4612d02aa76b3bd17accecb53590184,PodSandboxId:c02a687fd4f03309860e8871ca5c5f63f46b4532db8c1e226aa33025cb9786d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289124500921618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99f019b232accbb33fa16cc1df6908f,},Annotations:map[string]string{io.kubernetes.container.hash: fd70a4e3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8071041a53cc805d8a42cf000e83f8609f879e658352e7fc37cf343fc582b48b,PodSandboxId:7cde200b7b4bb9b8c24372710ec7f82cf7fff4d66a1fcabebe7f4f2d3be7e165,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289124462499241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 115373a09145343a060ea5d2d8311604,},Annotations:map[string]string{io.kubernetes.container.hash: 6217f75,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5,PodSandboxId:7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289029231632570,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 115373a09145343a060ea5d2d8311604,},Annotations:map[string]string{io.kubernetes.container.hash: 6217f75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462,PodSandboxId:b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713289029093943705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99f019b232accbb33fa16cc1df6908f,},Annotations:map[string]string{io.kubernetes.container.hash: fd70a4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4,PodSandboxId:b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713289029028083935,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe9712396e09e330c5a7eb325febfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552,PodSandboxId:b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713289016416371803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a43e7dc4ba35745d26de8ff0be2595,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae1f438e-ab30-4256-b5a2-bdd28e10344b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.014041219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3d32433-268e-4db0-995d-f03d6ee8194c name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.014145203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3d32433-268e-4db0-995d-f03d6ee8194c name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.017196952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=450f22f9-6aac-46e8-9066-1a889bcf554a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.017552162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289131017529155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=450f22f9-6aac-46e8-9066-1a889bcf554a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.018258006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=320aba09-c152-4c6a-867f-37a6b713aa7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.018312774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=320aba09-c152-4c6a-867f-37a6b713aa7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.018461163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c860f12b782313847528f71dc53d137273459c49c9223226af699d85c131a426,PodSandboxId:d0802af3695afb88bd804fe90dd39104a08d28c3add88db3891bf1e1a7250971,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289124554817029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe9712396e09e330c5a7eb325febfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0d3c5972bb71d7775cfab7f70eb4db8a4612d02aa76b3bd17accecb53590184,PodSandboxId:c02a687fd4f03309860e8871ca5c5f63f46b4532db8c1e226aa33025cb9786d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289124500921618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99f019b232accbb33fa16cc1df6908f,},Annotations:map[string]string{io.kubernetes.container.hash: fd70a4e3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8071041a53cc805d8a42cf000e83f8609f879e658352e7fc37cf343fc582b48b,PodSandboxId:7cde200b7b4bb9b8c24372710ec7f82cf7fff4d66a1fcabebe7f4f2d3be7e165,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289124462499241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 115373a09145343a060ea5d2d8311604,},Annotations:map[string]string{io.kubernetes.container.hash: 6217f75,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5,PodSandboxId:7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289029231632570,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 115373a09145343a060ea5d2d8311604,},Annotations:map[string]string{io.kubernetes.container.hash: 6217f75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462,PodSandboxId:b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713289029093943705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99f019b232accbb33fa16cc1df6908f,},Annotations:map[string]string{io.kubernetes.container.hash: fd70a4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4,PodSandboxId:b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713289029028083935,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe9712396e09e330c5a7eb325febfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552,PodSandboxId:b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713289016416371803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a43e7dc4ba35745d26de8ff0be2595,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=320aba09-c152-4c6a-867f-37a6b713aa7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.073354522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cecad471-75cb-43b1-af2a-bd0165ee142d name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.073487278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cecad471-75cb-43b1-af2a-bd0165ee142d name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.075104015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1836a7ab-22d8-4cbd-9ff0-ab2e50d7b450 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.075647032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289131075615120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1836a7ab-22d8-4cbd-9ff0-ab2e50d7b450 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.076685566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afad068c-c7c3-438a-b774-67dd2a549725 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.076990020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afad068c-c7c3-438a-b774-67dd2a549725 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.077370049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c860f12b782313847528f71dc53d137273459c49c9223226af699d85c131a426,PodSandboxId:d0802af3695afb88bd804fe90dd39104a08d28c3add88db3891bf1e1a7250971,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289124554817029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe9712396e09e330c5a7eb325febfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0d3c5972bb71d7775cfab7f70eb4db8a4612d02aa76b3bd17accecb53590184,PodSandboxId:c02a687fd4f03309860e8871ca5c5f63f46b4532db8c1e226aa33025cb9786d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289124500921618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99f019b232accbb33fa16cc1df6908f,},Annotations:map[string]string{io.kubernetes.container.hash: fd70a4e3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8071041a53cc805d8a42cf000e83f8609f879e658352e7fc37cf343fc582b48b,PodSandboxId:7cde200b7b4bb9b8c24372710ec7f82cf7fff4d66a1fcabebe7f4f2d3be7e165,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289124462499241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 115373a09145343a060ea5d2d8311604,},Annotations:map[string]string{io.kubernetes.container.hash: 6217f75,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5,PodSandboxId:7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289029231632570,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 115373a09145343a060ea5d2d8311604,},Annotations:map[string]string{io.kubernetes.container.hash: 6217f75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462,PodSandboxId:b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713289029093943705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99f019b232accbb33fa16cc1df6908f,},Annotations:map[string]string{io.kubernetes.container.hash: fd70a4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4,PodSandboxId:b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713289029028083935,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe9712396e09e330c5a7eb325febfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552,PodSandboxId:b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713289016416371803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a43e7dc4ba35745d26de8ff0be2595,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afad068c-c7c3-438a-b774-67dd2a549725 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.119589931Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9743dbbc-aceb-4ba6-8363-dd5477c844c7 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.119689746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9743dbbc-aceb-4ba6-8363-dd5477c844c7 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.120621431Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76557721-0e75-4664-9ec8-92fe7dd4da30 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.121168447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289131121144351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76557721-0e75-4664-9ec8-92fe7dd4da30 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.121648276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51ebd2de-662f-4052-972f-7325066919df name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.121789559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51ebd2de-662f-4052-972f-7325066919df name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:51 kubernetes-upgrade-633875 crio[1938]: time="2024-04-16 17:38:51.121942732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c860f12b782313847528f71dc53d137273459c49c9223226af699d85c131a426,PodSandboxId:d0802af3695afb88bd804fe90dd39104a08d28c3add88db3891bf1e1a7250971,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289124554817029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe9712396e09e330c5a7eb325febfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0d3c5972bb71d7775cfab7f70eb4db8a4612d02aa76b3bd17accecb53590184,PodSandboxId:c02a687fd4f03309860e8871ca5c5f63f46b4532db8c1e226aa33025cb9786d8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289124500921618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99f019b232accbb33fa16cc1df6908f,},Annotations:map[string]string{io.kubernetes.container.hash: fd70a4e3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8071041a53cc805d8a42cf000e83f8609f879e658352e7fc37cf343fc582b48b,PodSandboxId:7cde200b7b4bb9b8c24372710ec7f82cf7fff4d66a1fcabebe7f4f2d3be7e165,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289124462499241,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 115373a09145343a060ea5d2d8311604,},Annotations:map[string]string{io.kubernetes.container.hash: 6217f75,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5,PodSandboxId:7d7342ba4361d4c9e227be060550c02f65fa980282bec355f426c766a21273fc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289029231632570,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 115373a09145343a060ea5d2d8311604,},Annotations:map[string]string{io.kubernetes.container.hash: 6217f75,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462,PodSandboxId:b631ec24f985966d4d5f3f6c8cf91358a8cad20fe15f008b06306f76096a67ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713289029093943705,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99f019b232accbb33fa16cc1df6908f,},Annotations:map[string]string{io.kubernetes.container.hash: fd70a4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4,PodSandboxId:b1196ced71c0ccf8fafe04f25835f4c671747318572fc39ecff0d5c82e1aa80a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713289029028083935,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfe9712396e09e330c5a7eb325febfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552,PodSandboxId:b212b991fc829744f99b3c79edcad0657c70221c89111767fd1f6d082ca365db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713289016416371803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-633875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a43e7dc4ba35745d26de8ff0be2595,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51ebd2de-662f-4052-972f-7325066919df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c860f12b78231       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   6 seconds ago        Running             kube-scheduler            2                   d0802af3695af       kube-scheduler-kubernetes-upgrade-633875
	d0d3c5972bb71       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   6 seconds ago        Running             kube-apiserver            2                   c02a687fd4f03       kube-apiserver-kubernetes-upgrade-633875
	8071041a53cc8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   6 seconds ago        Running             etcd                      2                   7cde200b7b4bb       etcd-kubernetes-upgrade-633875
	34f1c7b2476cd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      1                   7d7342ba4361d       etcd-kubernetes-upgrade-633875
	52bebea5be0df       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   About a minute ago   Exited              kube-apiserver            1                   b631ec24f9859       kube-apiserver-kubernetes-upgrade-633875
	064ba366dd5c8       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   About a minute ago   Exited              kube-scheduler            1                   b1196ced71c0c       kube-scheduler-kubernetes-upgrade-633875
	7fd8c91dbb7ae       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   About a minute ago   Exited              kube-controller-manager   0                   b212b991fc829       kube-controller-manager-kubernetes-upgrade-633875
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-633875
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-633875
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:36:59 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-633875
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:38:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:38:47 +0000   Tue, 16 Apr 2024 17:36:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:38:47 +0000   Tue, 16 Apr 2024 17:36:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:38:47 +0000   Tue, 16 Apr 2024 17:36:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:38:47 +0000   Tue, 16 Apr 2024 17:37:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    kubernetes-upgrade-633875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 46e9cfbc3b0a4cac9f73dd57cdfbe984
	  System UUID:                46e9cfbc-3b0a-4cac-9f73-dd57cdfbe984
	  Boot ID:                    2c32abc5-ed84-4d9f-979c-4ca87e0e12b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-633875                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         109s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-633875    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-kubernetes-upgrade-633875             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                400m (20%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From     Message
	  ----    ------                   ----                 ----     -------
	  Normal  Starting                 116s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet  Node kubernetes-upgrade-633875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet  Node kubernetes-upgrade-633875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)  kubelet  Node kubernetes-upgrade-633875 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +1.638636] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.235115] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.061106] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064750] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.177618] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.169900] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.354326] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +5.282029] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[  +0.067628] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.281095] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[Apr16 17:37] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.098342] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.120603] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.650842] systemd-fstab-generator[1752]: Ignoring "noauto" option for root device
	[  +0.194108] systemd-fstab-generator[1789]: Ignoring "noauto" option for root device
	[  +0.261442] systemd-fstab-generator[1804]: Ignoring "noauto" option for root device
	[  +0.218286] systemd-fstab-generator[1818]: Ignoring "noauto" option for root device
	[  +0.378684] systemd-fstab-generator[1846]: Ignoring "noauto" option for root device
	[Apr16 17:38] systemd-fstab-generator[2025]: Ignoring "noauto" option for root device
	[  +0.070977] kauditd_printk_skb: 162 callbacks suppressed
	[  +1.880073] systemd-fstab-generator[2149]: Ignoring "noauto" option for root device
	[  +5.793866] systemd-fstab-generator[2526]: Ignoring "noauto" option for root device
	[  +0.087233] kauditd_printk_skb: 75 callbacks suppressed
	
	
	==> etcd [34f1c7b2476cd356af11d085aa5e701762c426978433a4907a29ce0b28bea6f5] <==
	{"level":"info","ts":"2024-04-16T17:37:09.730422Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"36.393386ms"}
	{"level":"info","ts":"2024-04-16T17:37:09.734053Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-16T17:37:09.814441Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","commit-index":307}
	{"level":"info","ts":"2024-04-16T17:37:09.826555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-16T17:37:09.830826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became follower at term 2"}
	{"level":"info","ts":"2024-04-16T17:37:09.830856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ba3e3e863cacc4d [peers: [], term: 2, commit: 307, applied: 0, lastindex: 307, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-16T17:37:09.87739Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-16T17:37:09.902614Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":300}
	{"level":"info","ts":"2024-04-16T17:37:09.908115Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-16T17:37:09.917542Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"ba3e3e863cacc4d","timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:37:09.921684Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"ba3e3e863cacc4d"}
	{"level":"info","ts":"2024-04-16T17:37:09.921876Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"ba3e3e863cacc4d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-16T17:37:09.944938Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-16T17:37:09.96212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=(838764542867197005)"}
	{"level":"info","ts":"2024-04-16T17:37:09.963108Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","added-peer-id":"ba3e3e863cacc4d","added-peer-peer-urls":["https://192.168.39.149:2380"]}
	{"level":"info","ts":"2024-04-16T17:37:09.963244Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:37:09.963295Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:37:09.962861Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:37:09.982685Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:37:09.98271Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:37:09.988523Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:37:09.996863Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-04-16T17:37:09.99851Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-04-16T17:37:09.997698Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ba3e3e863cacc4d","initial-advertise-peer-urls":["https://192.168.39.149:2380"],"listen-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.149:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:37:10.00104Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [8071041a53cc805d8a42cf000e83f8609f879e658352e7fc37cf343fc582b48b] <==
	{"level":"info","ts":"2024-04-16T17:38:44.779941Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:38:44.779952Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:38:44.780243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d switched to configuration voters=(838764542867197005)"}
	{"level":"info","ts":"2024-04-16T17:38:44.780462Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","added-peer-id":"ba3e3e863cacc4d","added-peer-peer-urls":["https://192.168.39.149:2380"]}
	{"level":"info","ts":"2024-04-16T17:38:44.780622Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"65f5490397676253","local-member-id":"ba3e3e863cacc4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:38:44.780646Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:38:44.788704Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:38:44.789208Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ba3e3e863cacc4d","initial-advertise-peer-urls":["https://192.168.39.149:2380"],"listen-peer-urls":["https://192.168.39.149:2380"],"advertise-client-urls":["https://192.168.39.149:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.149:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:38:44.78926Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T17:38:44.78935Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-04-16T17:38:44.789358Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.149:2380"}
	{"level":"info","ts":"2024-04-16T17:38:45.762195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T17:38:45.762287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:38:45.762318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgPreVoteResp from ba3e3e863cacc4d at term 2"}
	{"level":"info","ts":"2024-04-16T17:38:45.762332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T17:38:45.762338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d received MsgVoteResp from ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-04-16T17:38:45.762346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba3e3e863cacc4d became leader at term 3"}
	{"level":"info","ts":"2024-04-16T17:38:45.762357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ba3e3e863cacc4d elected leader ba3e3e863cacc4d at term 3"}
	{"level":"info","ts":"2024-04-16T17:38:45.766027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:38:45.766225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:38:45.766553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:38:45.766593Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:38:45.766029Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ba3e3e863cacc4d","local-member-attributes":"{Name:kubernetes-upgrade-633875 ClientURLs:[https://192.168.39.149:2379]}","request-path":"/0/members/ba3e3e863cacc4d/attributes","cluster-id":"65f5490397676253","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:38:45.768461Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.149:2379"}
	{"level":"info","ts":"2024-04-16T17:38:45.768553Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 17:38:51 up 2 min,  0 users,  load average: 0.26, 0.18, 0.08
	Linux kubernetes-upgrade-633875 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [52bebea5be0dff7b5cf3e9cf8a0bdc633ebe3c6e8418d0b214bd4f02adc15462] <==
	I0416 17:37:09.670631       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:37:10.352650       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0416 17:37:10.360094       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0416 17:37:10.361634       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0416 17:37:10.361649       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0416 17:37:10.363828       1 instance.go:299] Using reconciler: lease
	W0416 17:37:10.744607       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:52328->127.0.0.1:2379: read: connection reset by peer"
	W0416 17:37:10.744847       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:52314->127.0.0.1:2379: read: connection reset by peer"
	W0416 17:37:10.745048       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:52318->127.0.0.1:2379: read: connection reset by peer"
	W0416 17:37:11.745797       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:11.745856       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:11.745897       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:13.247019       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:13.315151       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:13.327067       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:15.421915       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:15.655862       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:16.052135       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:19.825966       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:19.836455       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:19.863458       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:25.354604       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:26.015165       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:37:27.397466       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0416 17:37:30.364864       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d0d3c5972bb71d7775cfab7f70eb4db8a4612d02aa76b3bd17accecb53590184] <==
	I0416 17:38:47.157060       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 17:38:47.157100       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0416 17:38:47.251620       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0416 17:38:47.251703       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:38:47.251713       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:38:47.252282       1 shared_informer.go:320] Caches are synced for configmaps
	I0416 17:38:47.256318       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0416 17:38:47.257146       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0416 17:38:47.259172       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:38:47.259200       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:38:47.259223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:38:47.259246       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:38:47.263827       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0416 17:38:47.263909       1 policy_source.go:224] refreshing policies
	E0416 17:38:47.285024       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0416 17:38:47.331356       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 17:38:47.333995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 17:38:47.340889       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0416 17:38:47.361281       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:38:48.136917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:38:48.958885       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0416 17:38:48.971017       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0416 17:38:48.998873       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0416 17:38:49.138897       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:38:49.146324       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552] <==
	I0416 17:37:04.843598       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0416 17:37:04.843644       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0416 17:37:04.843808       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0416 17:37:04.843845       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0416 17:37:04.995006       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0416 17:37:04.995068       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0416 17:37:04.995081       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0416 17:37:05.042063       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0416 17:37:05.042148       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0416 17:37:05.042166       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0416 17:37:05.042183       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0416 17:37:05.194696       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0416 17:37:05.194914       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0416 17:37:05.194956       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0416 17:37:05.345237       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0416 17:37:05.345449       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0416 17:37:05.494071       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0416 17:37:05.494161       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0416 17:37:05.494303       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0416 17:37:05.645583       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0416 17:37:05.645946       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0416 17:37:05.645999       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0416 17:37:05.793948       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0416 17:37:05.794052       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0416 17:37:05.794064       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	
	
	==> kube-scheduler [064ba366dd5c8d8c51c97527fe9745f831ac4cf97fecd0313d3f5732e85193e4] <==
	I0416 17:37:10.843267       1 serving.go:380] Generated self-signed cert in-memory
	W0416 17:37:21.290955       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.39.149:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0416 17:37:21.291079       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 17:37:21.291092       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 17:37:31.381160       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.2"
	I0416 17:37:31.381194       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:37:31.383581       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:37:31.383606       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0416 17:37:31.383621       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:37:31.383628       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:37:31.384130       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0416 17:37:31.384241       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0416 17:37:31.384368       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 17:37:31.384381       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0416 17:37:31.384437       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c860f12b782313847528f71dc53d137273459c49c9223226af699d85c131a426] <==
	I0416 17:38:45.199337       1 serving.go:380] Generated self-signed cert in-memory
	W0416 17:38:47.184821       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 17:38:47.184870       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:38:47.184896       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 17:38:47.184902       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 17:38:47.226123       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.2"
	I0416 17:38:47.226175       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:38:47.231880       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:38:47.232038       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:38:47.234696       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 17:38:47.235953       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 17:38:47.332821       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:44.008681    2156 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-633875"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.009935    2156 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.149:8443: connect: connection refused" node="kubernetes-upgrade-633875"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.308991    2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-633875?timeout=10s\": dial tcp 192.168.39.149:8443: connect: connection refused" interval="800ms"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:44.411521    2156 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-633875"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.414335    2156 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.149:8443: connect: connection refused" node="kubernetes-upgrade-633875"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.463575    2156 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1\" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="aefcab41c2e4c904c7e1eed2c8adb500a4839aa2bedc5d8c6dfb250f290c22a0"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.463875    2156 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.0-rc.2,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service
-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,
},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-6
33875_kube-system(d6a43e7dc4ba35745d26de8ff0be2595): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.463941    2156 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1\\\" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-633875" podUID="d6a43e7dc4ba35745d26de8ff0be2595"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:44.833876    2156 scope.go:117] "RemoveContainer" containerID="7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.847892    2156 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1\" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="aefcab41c2e4c904c7e1eed2c8adb500a4839aa2bedc5d8c6dfb250f290c22a0"
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.848048    2156 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.0-rc.2,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service
-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,
},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-6
33875_kube-system(d6a43e7dc4ba35745d26de8ff0be2595): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use
	Apr 16 17:38:44 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:44.848085    2156 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1\\\" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-633875" podUID="d6a43e7dc4ba35745d26de8ff0be2595"
	Apr 16 17:38:45 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:45.216624    2156 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-633875"
	Apr 16 17:38:47 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:47.289529    2156 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-633875"
	Apr 16 17:38:47 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:47.290073    2156 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-633875"
	Apr 16 17:38:47 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:47.691176    2156 apiserver.go:52] "Watching apiserver"
	Apr 16 17:38:47 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:47.694713    2156 scope.go:117] "RemoveContainer" containerID="7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552"
	Apr 16 17:38:47 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:47.703233    2156 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1\" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="aefcab41c2e4c904c7e1eed2c8adb500a4839aa2bedc5d8c6dfb250f290c22a0"
	Apr 16 17:38:47 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:47.703434    2156 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.0-rc.2,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service
-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,
},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-6
33875_kube-system(d6a43e7dc4ba35745d26de8ff0be2595): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use
	Apr 16 17:38:47 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:47.703500    2156 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1\\\" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-633875" podUID="d6a43e7dc4ba35745d26de8ff0be2595"
	Apr 16 17:38:47 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:47.705068    2156 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 16 17:38:50 kubernetes-upgrade-633875 kubelet[2156]: I0416 17:38:50.322004    2156 scope.go:117] "RemoveContainer" containerID="7fd8c91dbb7ae450e6644741e319d6159a6d3054c72c691fd7d341051df15552"
	Apr 16 17:38:50 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:50.333187    2156 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1\" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="aefcab41c2e4c904c7e1eed2c8adb500a4839aa2bedc5d8c6dfb250f290c22a0"
	Apr 16 17:38:50 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:50.333367    2156 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.0-rc.2,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service
-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,
},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-6
33875_kube-system(d6a43e7dc4ba35745d26de8ff0be2595): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use
	Apr 16 17:38:50 kubernetes-upgrade-633875 kubelet[2156]: E0416 17:38:50.333418    2156 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-633875_kube-system_d6a43e7dc4ba35745d26de8ff0be2595_1\\\" is already in use by 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-633875" podUID="d6a43e7dc4ba35745d26de8ff0be2595"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-633875 -n kubernetes-upgrade-633875
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-633875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-633875 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-633875 describe pod storage-provisioner: exit status 1 (63.522507ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-633875 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-633875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-633875
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-633875: (1.10239812s)
--- FAIL: TestKubernetesUpgrade (467.21s)
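The kubelet entries above show kube-controller-manager stuck in CreateContainerError because CRI-O still holds an earlier container (the ID quoted in the error) that occupies the generated container name, so the restarted kubelet cannot reuse it. A minimal sketch of how that conflict could be inspected by hand, not part of the recorded run, assuming SSH access to the profile and a crictl pointed at the node's CRI-O socket:

	# open a shell on the upgraded node (hypothetical manual step)
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-633875
	# list all containers, including exited ones, whose name matches kube-controller-manager
	sudo crictl ps -a --name kube-controller-manager
	# remove the stale container named in the kubelet error so the name becomes reusable
	sudo crictl rm 0d98a36e3f6692f9b56e8735d5e78fc46b496de27ddda89bda35abb168fdedf3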

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (300.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-795352 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-795352 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m0.551620821s)

-- stdout --
	* [old-k8s-version-795352] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-795352" primary control-plane node in "old-k8s-version-795352" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0416 17:22:55.463708   45115 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:22:55.463948   45115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:22:55.463957   45115 out.go:304] Setting ErrFile to fd 2...
	I0416 17:22:55.463962   45115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:22:55.464181   45115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:22:55.464711   45115 out.go:298] Setting JSON to false
	I0416 17:22:55.465557   45115 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3927,"bootTime":1713284248,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:22:55.465613   45115 start.go:139] virtualization: kvm guest
	I0416 17:22:55.467925   45115 out.go:177] * [old-k8s-version-795352] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:22:55.470724   45115 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:22:55.470072   45115 notify.go:220] Checking for updates...
	I0416 17:22:55.473219   45115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:22:55.476792   45115 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:22:55.478055   45115 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:22:55.479284   45115 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:22:55.480727   45115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:22:55.482416   45115 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:22:55.524012   45115 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 17:22:55.525336   45115 start.go:297] selected driver: kvm2
	I0416 17:22:55.525349   45115 start.go:901] validating driver "kvm2" against <nil>
	I0416 17:22:55.525361   45115 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:22:55.526328   45115 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:22:55.545077   45115 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:22:55.562814   45115 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:22:55.562856   45115 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:22:55.563053   45115 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:22:55.563123   45115 cni.go:84] Creating CNI manager for ""
	I0416 17:22:55.563137   45115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:22:55.563145   45115 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 17:22:55.563194   45115 start.go:340] cluster config:
	{Name:old-k8s-version-795352 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:22:55.563284   45115 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:22:55.565021   45115 out.go:177] * Starting "old-k8s-version-795352" primary control-plane node in "old-k8s-version-795352" cluster
	I0416 17:22:55.566384   45115 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 17:22:55.566429   45115 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 17:22:55.566448   45115 cache.go:56] Caching tarball of preloaded images
	I0416 17:22:55.566644   45115 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:22:55.566677   45115 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0416 17:22:55.567247   45115 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/config.json ...
	I0416 17:22:55.567312   45115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/config.json: {Name:mk89a4d870c1a1e3739526ed4481fded6f20b765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:22:55.567504   45115 start.go:360] acquireMachinesLock for old-k8s-version-795352: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:23:20.254695   45115 start.go:364] duration metric: took 24.687127936s to acquireMachinesLock for "old-k8s-version-795352"
	I0416 17:23:20.254751   45115 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-795352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:23:20.254938   45115 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 17:23:20.257350   45115 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 17:23:20.257600   45115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:23:20.257653   45115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:23:20.274089   45115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41253
	I0416 17:23:20.274568   45115 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:23:20.275146   45115 main.go:141] libmachine: Using API Version  1
	I0416 17:23:20.275163   45115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:23:20.275477   45115 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:23:20.275683   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetMachineName
	I0416 17:23:20.275835   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:23:20.276008   45115 start.go:159] libmachine.API.Create for "old-k8s-version-795352" (driver="kvm2")
	I0416 17:23:20.276037   45115 client.go:168] LocalClient.Create starting
	I0416 17:23:20.276069   45115 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 17:23:20.276110   45115 main.go:141] libmachine: Decoding PEM data...
	I0416 17:23:20.276144   45115 main.go:141] libmachine: Parsing certificate...
	I0416 17:23:20.276209   45115 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 17:23:20.276234   45115 main.go:141] libmachine: Decoding PEM data...
	I0416 17:23:20.276249   45115 main.go:141] libmachine: Parsing certificate...
	I0416 17:23:20.276278   45115 main.go:141] libmachine: Running pre-create checks...
	I0416 17:23:20.276291   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .PreCreateCheck
	I0416 17:23:20.276721   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetConfigRaw
	I0416 17:23:20.277160   45115 main.go:141] libmachine: Creating machine...
	I0416 17:23:20.277177   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .Create
	I0416 17:23:20.277327   45115 main.go:141] libmachine: (old-k8s-version-795352) Creating KVM machine...
	I0416 17:23:20.278339   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found existing default KVM network
	I0416 17:23:20.279039   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:20.278884   45444 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fb:82:0f} reservation:<nil>}
	I0416 17:23:20.279692   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:20.279611   45444 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fa50}
	I0416 17:23:20.279726   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | created network xml: 
	I0416 17:23:20.279747   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | <network>
	I0416 17:23:20.279763   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |   <name>mk-old-k8s-version-795352</name>
	I0416 17:23:20.279778   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |   <dns enable='no'/>
	I0416 17:23:20.279790   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |   
	I0416 17:23:20.279802   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0416 17:23:20.279826   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |     <dhcp>
	I0416 17:23:20.279839   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0416 17:23:20.279855   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |     </dhcp>
	I0416 17:23:20.279867   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |   </ip>
	I0416 17:23:20.279876   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG |   
	I0416 17:23:20.279887   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | </network>
	I0416 17:23:20.279901   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | 
	I0416 17:23:20.285194   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | trying to create private KVM network mk-old-k8s-version-795352 192.168.50.0/24...
	I0416 17:23:20.354561   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | private KVM network mk-old-k8s-version-795352 192.168.50.0/24 created
	I0416 17:23:20.354638   45115 main.go:141] libmachine: (old-k8s-version-795352) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352 ...
	I0416 17:23:20.354668   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:20.354529   45444 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:23:20.354686   45115 main.go:141] libmachine: (old-k8s-version-795352) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 17:23:20.354902   45115 main.go:141] libmachine: (old-k8s-version-795352) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:23:20.572903   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:20.572801   45444 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa...
	I0416 17:23:20.669590   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:20.669472   45444 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/old-k8s-version-795352.rawdisk...
	I0416 17:23:20.669626   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Writing magic tar header
	I0416 17:23:20.669700   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Writing SSH key tar header
	I0416 17:23:20.669731   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:20.669589   45444 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352 ...
	I0416 17:23:20.669773   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352
	I0416 17:23:20.669801   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 17:23:20.669815   45115 main.go:141] libmachine: (old-k8s-version-795352) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352 (perms=drwx------)
	I0416 17:23:20.669839   45115 main.go:141] libmachine: (old-k8s-version-795352) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 17:23:20.669854   45115 main.go:141] libmachine: (old-k8s-version-795352) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 17:23:20.669868   45115 main.go:141] libmachine: (old-k8s-version-795352) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 17:23:20.669880   45115 main.go:141] libmachine: (old-k8s-version-795352) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 17:23:20.669894   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:23:20.669907   45115 main.go:141] libmachine: (old-k8s-version-795352) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 17:23:20.669924   45115 main.go:141] libmachine: (old-k8s-version-795352) Creating domain...
	I0416 17:23:20.669938   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 17:23:20.669967   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 17:23:20.669992   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Checking permissions on dir: /home/jenkins
	I0416 17:23:20.670006   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Checking permissions on dir: /home
	I0416 17:23:20.670022   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Skipping /home - not owner
	I0416 17:23:20.670927   45115 main.go:141] libmachine: (old-k8s-version-795352) define libvirt domain using xml: 
	I0416 17:23:20.670951   45115 main.go:141] libmachine: (old-k8s-version-795352) <domain type='kvm'>
	I0416 17:23:20.670962   45115 main.go:141] libmachine: (old-k8s-version-795352)   <name>old-k8s-version-795352</name>
	I0416 17:23:20.670970   45115 main.go:141] libmachine: (old-k8s-version-795352)   <memory unit='MiB'>2200</memory>
	I0416 17:23:20.670982   45115 main.go:141] libmachine: (old-k8s-version-795352)   <vcpu>2</vcpu>
	I0416 17:23:20.670993   45115 main.go:141] libmachine: (old-k8s-version-795352)   <features>
	I0416 17:23:20.671001   45115 main.go:141] libmachine: (old-k8s-version-795352)     <acpi/>
	I0416 17:23:20.671012   45115 main.go:141] libmachine: (old-k8s-version-795352)     <apic/>
	I0416 17:23:20.671035   45115 main.go:141] libmachine: (old-k8s-version-795352)     <pae/>
	I0416 17:23:20.671046   45115 main.go:141] libmachine: (old-k8s-version-795352)     
	I0416 17:23:20.671055   45115 main.go:141] libmachine: (old-k8s-version-795352)   </features>
	I0416 17:23:20.671066   45115 main.go:141] libmachine: (old-k8s-version-795352)   <cpu mode='host-passthrough'>
	I0416 17:23:20.671073   45115 main.go:141] libmachine: (old-k8s-version-795352)   
	I0416 17:23:20.671082   45115 main.go:141] libmachine: (old-k8s-version-795352)   </cpu>
	I0416 17:23:20.671091   45115 main.go:141] libmachine: (old-k8s-version-795352)   <os>
	I0416 17:23:20.671114   45115 main.go:141] libmachine: (old-k8s-version-795352)     <type>hvm</type>
	I0416 17:23:20.671127   45115 main.go:141] libmachine: (old-k8s-version-795352)     <boot dev='cdrom'/>
	I0416 17:23:20.671137   45115 main.go:141] libmachine: (old-k8s-version-795352)     <boot dev='hd'/>
	I0416 17:23:20.671160   45115 main.go:141] libmachine: (old-k8s-version-795352)     <bootmenu enable='no'/>
	I0416 17:23:20.671169   45115 main.go:141] libmachine: (old-k8s-version-795352)   </os>
	I0416 17:23:20.671178   45115 main.go:141] libmachine: (old-k8s-version-795352)   <devices>
	I0416 17:23:20.671190   45115 main.go:141] libmachine: (old-k8s-version-795352)     <disk type='file' device='cdrom'>
	I0416 17:23:20.671208   45115 main.go:141] libmachine: (old-k8s-version-795352)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/boot2docker.iso'/>
	I0416 17:23:20.671220   45115 main.go:141] libmachine: (old-k8s-version-795352)       <target dev='hdc' bus='scsi'/>
	I0416 17:23:20.671232   45115 main.go:141] libmachine: (old-k8s-version-795352)       <readonly/>
	I0416 17:23:20.671242   45115 main.go:141] libmachine: (old-k8s-version-795352)     </disk>
	I0416 17:23:20.671251   45115 main.go:141] libmachine: (old-k8s-version-795352)     <disk type='file' device='disk'>
	I0416 17:23:20.671273   45115 main.go:141] libmachine: (old-k8s-version-795352)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 17:23:20.671292   45115 main.go:141] libmachine: (old-k8s-version-795352)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/old-k8s-version-795352.rawdisk'/>
	I0416 17:23:20.671303   45115 main.go:141] libmachine: (old-k8s-version-795352)       <target dev='hda' bus='virtio'/>
	I0416 17:23:20.671313   45115 main.go:141] libmachine: (old-k8s-version-795352)     </disk>
	I0416 17:23:20.671325   45115 main.go:141] libmachine: (old-k8s-version-795352)     <interface type='network'>
	I0416 17:23:20.671337   45115 main.go:141] libmachine: (old-k8s-version-795352)       <source network='mk-old-k8s-version-795352'/>
	I0416 17:23:20.671347   45115 main.go:141] libmachine: (old-k8s-version-795352)       <model type='virtio'/>
	I0416 17:23:20.671360   45115 main.go:141] libmachine: (old-k8s-version-795352)     </interface>
	I0416 17:23:20.671372   45115 main.go:141] libmachine: (old-k8s-version-795352)     <interface type='network'>
	I0416 17:23:20.671385   45115 main.go:141] libmachine: (old-k8s-version-795352)       <source network='default'/>
	I0416 17:23:20.671393   45115 main.go:141] libmachine: (old-k8s-version-795352)       <model type='virtio'/>
	I0416 17:23:20.671404   45115 main.go:141] libmachine: (old-k8s-version-795352)     </interface>
	I0416 17:23:20.671416   45115 main.go:141] libmachine: (old-k8s-version-795352)     <serial type='pty'>
	I0416 17:23:20.671426   45115 main.go:141] libmachine: (old-k8s-version-795352)       <target port='0'/>
	I0416 17:23:20.671430   45115 main.go:141] libmachine: (old-k8s-version-795352)     </serial>
	I0416 17:23:20.671443   45115 main.go:141] libmachine: (old-k8s-version-795352)     <console type='pty'>
	I0416 17:23:20.671455   45115 main.go:141] libmachine: (old-k8s-version-795352)       <target type='serial' port='0'/>
	I0416 17:23:20.671468   45115 main.go:141] libmachine: (old-k8s-version-795352)     </console>
	I0416 17:23:20.671479   45115 main.go:141] libmachine: (old-k8s-version-795352)     <rng model='virtio'>
	I0416 17:23:20.671492   45115 main.go:141] libmachine: (old-k8s-version-795352)       <backend model='random'>/dev/random</backend>
	I0416 17:23:20.671502   45115 main.go:141] libmachine: (old-k8s-version-795352)     </rng>
	I0416 17:23:20.671510   45115 main.go:141] libmachine: (old-k8s-version-795352)     
	I0416 17:23:20.671518   45115 main.go:141] libmachine: (old-k8s-version-795352)     
	I0416 17:23:20.671526   45115 main.go:141] libmachine: (old-k8s-version-795352)   </devices>
	I0416 17:23:20.671545   45115 main.go:141] libmachine: (old-k8s-version-795352) </domain>
	I0416 17:23:20.671559   45115 main.go:141] libmachine: (old-k8s-version-795352) 
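Note: the XML dumped above is the libvirt domain definition the kvm2 driver hands to libvirtd before booting the VM. As a rough illustration only (a minimal sketch assuming the libvirt.org/go/libvirt bindings and a hypothetical domain.xml path, not minikube's actual driver code), defining and starting such a domain looks like this:

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the system libvirt daemon, as the kvm2 driver does (KVMQemuURI:qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // Read a domain definition like the one logged above.
        xml, err := os.ReadFile("domain.xml") // hypothetical path
        if err != nil {
            log.Fatalf("read xml: %v", err)
        }

        // Define the persistent domain, then start it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatalf("define: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatalf("start: %v", err)
        }
        log.Println("domain defined and started")
    }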
	I0416 17:23:20.678205   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:1e:4f:fb in network default
	I0416 17:23:20.678768   45115 main.go:141] libmachine: (old-k8s-version-795352) Ensuring networks are active...
	I0416 17:23:20.678793   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:20.679429   45115 main.go:141] libmachine: (old-k8s-version-795352) Ensuring network default is active
	I0416 17:23:20.679754   45115 main.go:141] libmachine: (old-k8s-version-795352) Ensuring network mk-old-k8s-version-795352 is active
	I0416 17:23:20.680263   45115 main.go:141] libmachine: (old-k8s-version-795352) Getting domain xml...
	I0416 17:23:20.680914   45115 main.go:141] libmachine: (old-k8s-version-795352) Creating domain...
	I0416 17:23:21.902060   45115 main.go:141] libmachine: (old-k8s-version-795352) Waiting to get IP...
	I0416 17:23:21.902938   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:21.903322   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:21.903348   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:21.903294   45444 retry.go:31] will retry after 241.214853ms: waiting for machine to come up
	I0416 17:23:22.146678   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:22.147353   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:22.147382   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:22.147316   45444 retry.go:31] will retry after 323.420251ms: waiting for machine to come up
	I0416 17:23:22.472617   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:22.473102   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:22.473136   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:22.473052   45444 retry.go:31] will retry after 389.35714ms: waiting for machine to come up
	I0416 17:23:22.863719   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:22.864199   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:22.864228   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:22.864171   45444 retry.go:31] will retry after 453.077921ms: waiting for machine to come up
	I0416 17:23:23.318656   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:23.319140   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:23.319788   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:23.319549   45444 retry.go:31] will retry after 634.472504ms: waiting for machine to come up
	I0416 17:23:23.955531   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:23.956023   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:23.956055   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:23.955968   45444 retry.go:31] will retry after 816.750123ms: waiting for machine to come up
	I0416 17:23:24.774026   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:24.774440   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:24.774472   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:24.774368   45444 retry.go:31] will retry after 1.191365295s: waiting for machine to come up
	I0416 17:23:25.967334   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:25.967804   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:25.967831   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:25.967760   45444 retry.go:31] will retry after 1.241211843s: waiting for machine to come up
	I0416 17:23:27.210290   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:27.210738   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:27.210766   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:27.210691   45444 retry.go:31] will retry after 1.81587298s: waiting for machine to come up
	I0416 17:23:29.027740   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:29.028090   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:29.028115   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:29.028040   45444 retry.go:31] will retry after 2.232078129s: waiting for machine to come up
	I0416 17:23:31.262075   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:31.262559   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:31.262587   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:31.262506   45444 retry.go:31] will retry after 2.762769559s: waiting for machine to come up
	I0416 17:23:34.027358   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:34.027853   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:34.027888   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:34.027797   45444 retry.go:31] will retry after 3.526948951s: waiting for machine to come up
	I0416 17:23:37.557727   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:37.558206   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:37.558233   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:37.558161   45444 retry.go:31] will retry after 3.837672285s: waiting for machine to come up
	I0416 17:23:41.397357   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:41.397708   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:23:41.397748   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:23:41.397665   45444 retry.go:31] will retry after 4.576680968s: waiting for machine to come up
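Note: the retry.go lines above show the driver polling the network's DHCP leases with a growing delay until the guest picks up an address. A generic sketch of that wait-with-backoff pattern (the lookupIP helper and the delay caps are illustrative assumptions, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt network's DHCP leases; placeholder only.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Sleep with a little jitter, then grow the delay, capped at a few seconds.
            jitter := time.Duration(rand.Int63n(int64(delay) / 4))
            time.Sleep(delay + jitter)
            if delay < 4*time.Second {
                delay = delay * 3 / 2
            }
        }
        return "", fmt.Errorf("timed out after %s waiting for machine to come up", timeout)
    }

    func main() {
        if ip, err := waitForIP(2 * time.Minute); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }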
	I0416 17:23:45.976191   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:45.976671   45115 main.go:141] libmachine: (old-k8s-version-795352) Found IP for machine: 192.168.50.168
	I0416 17:23:45.976697   45115 main.go:141] libmachine: (old-k8s-version-795352) Reserving static IP address...
	I0416 17:23:45.976725   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has current primary IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:45.977188   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-795352", mac: "52:54:00:9a:48:0e", ip: "192.168.50.168"} in network mk-old-k8s-version-795352
	I0416 17:23:46.048325   45115 main.go:141] libmachine: (old-k8s-version-795352) Reserved static IP address: 192.168.50.168
	I0416 17:23:46.048352   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Getting to WaitForSSH function...
	I0416 17:23:46.048362   45115 main.go:141] libmachine: (old-k8s-version-795352) Waiting for SSH to be available...
	I0416 17:23:46.050852   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.051242   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.051275   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.051399   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Using SSH client type: external
	I0416 17:23:46.051427   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa (-rw-------)
	I0416 17:23:46.051472   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 17:23:46.051486   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | About to run SSH command:
	I0416 17:23:46.051505   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | exit 0
	I0416 17:23:46.181012   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | SSH cmd err, output: <nil>: 
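Note: above, the driver shells out to /usr/bin/ssh and runs `exit 0` until the command succeeds, which is its signal that sshd inside the guest is reachable. The same probe can be written with a Go SSH client (a sketch assuming golang.org/x/crypto/ssh; minikube itself uses the external ssh binary here, exactly as logged):

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // sshAlive returns nil once "exit 0" runs successfully on the remote host.
    func sshAlive(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        // Address, user, and key path taken from the log lines above.
        key := "/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa"
        if err := sshAlive("192.168.50.168:22", "docker", key); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }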
	I0416 17:23:46.181279   45115 main.go:141] libmachine: (old-k8s-version-795352) KVM machine creation complete!
	I0416 17:23:46.181657   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetConfigRaw
	I0416 17:23:46.182263   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:23:46.182449   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:23:46.182629   45115 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 17:23:46.182643   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetState
	I0416 17:23:46.184002   45115 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 17:23:46.184016   45115 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 17:23:46.184021   45115 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 17:23:46.184027   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:46.186775   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.187200   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.187221   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.187375   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:46.187552   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.187700   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.187853   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:46.188012   45115 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:46.188194   45115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:23:46.188205   45115 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 17:23:46.288086   45115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:23:46.288112   45115 main.go:141] libmachine: Detecting the provisioner...
	I0416 17:23:46.288123   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:46.290679   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.290963   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.290992   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.291092   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:46.291272   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.291444   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.291626   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:46.291801   45115 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:46.291988   45115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:23:46.292003   45115 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 17:23:46.393674   45115 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 17:23:46.393766   45115 main.go:141] libmachine: found compatible host: buildroot
	I0416 17:23:46.393781   45115 main.go:141] libmachine: Provisioning with buildroot...
	I0416 17:23:46.393795   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetMachineName
	I0416 17:23:46.394049   45115 buildroot.go:166] provisioning hostname "old-k8s-version-795352"
	I0416 17:23:46.394081   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetMachineName
	I0416 17:23:46.394286   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:46.397195   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.397586   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.397608   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.397750   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:46.397905   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.398062   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.398228   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:46.398373   45115 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:46.398570   45115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:23:46.398589   45115 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-795352 && echo "old-k8s-version-795352" | sudo tee /etc/hostname
	I0416 17:23:46.512190   45115 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-795352
	
	I0416 17:23:46.512219   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:46.514829   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.515058   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.515095   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.515360   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:46.515569   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.515722   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.515868   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:46.516030   45115 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:46.516201   45115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:23:46.516219   45115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-795352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-795352/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-795352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:23:46.630472   45115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:23:46.630510   45115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:23:46.630534   45115 buildroot.go:174] setting up certificates
	I0416 17:23:46.630545   45115 provision.go:84] configureAuth start
	I0416 17:23:46.630554   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetMachineName
	I0416 17:23:46.630850   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetIP
	I0416 17:23:46.633619   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.633966   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.633998   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.634164   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:46.636655   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.637042   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.637075   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.637229   45115 provision.go:143] copyHostCerts
	I0416 17:23:46.637315   45115 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:23:46.637337   45115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:23:46.637400   45115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:23:46.637512   45115 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:23:46.637525   45115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:23:46.637555   45115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:23:46.637641   45115 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:23:46.637652   45115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:23:46.637679   45115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:23:46.637756   45115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-795352 san=[127.0.0.1 192.168.50.168 localhost minikube old-k8s-version-795352]
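Note: provision.go above issues a server certificate signed by the local minikube CA with the listed IP and DNS SANs. A condensed sketch of that kind of issuance with the Go standard library (field values and validity period are assumptions for illustration, not minikube's exact certificate template; error handling for key generation is elided to keep it short):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // A self-signed CA stands in for the ca.pem/ca-key.pem pair referenced above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs seen in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-795352"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-795352"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.168")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }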
	I0416 17:23:46.811463   45115 provision.go:177] copyRemoteCerts
	I0416 17:23:46.811515   45115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:23:46.811541   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:46.814225   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.814515   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.814537   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.814791   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:46.814970   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.815120   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:46.815243   45115 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa Username:docker}
	I0416 17:23:46.896187   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:23:46.924847   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 17:23:46.951834   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:23:46.978200   45115 provision.go:87] duration metric: took 347.642793ms to configureAuth
	I0416 17:23:46.978225   45115 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:23:46.978385   45115 config.go:182] Loaded profile config "old-k8s-version-795352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 17:23:46.978449   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:46.981033   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.981333   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:46.981361   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:46.981513   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:46.981679   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.981849   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:46.981975   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:46.982123   45115 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:46.982287   45115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:23:46.982312   45115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:23:47.254897   45115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:23:47.254924   45115 main.go:141] libmachine: Checking connection to Docker...
	I0416 17:23:47.254935   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetURL
	I0416 17:23:47.256309   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | Using libvirt version 6000000
	I0416 17:23:47.258278   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.258537   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:47.258568   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.258687   45115 main.go:141] libmachine: Docker is up and running!
	I0416 17:23:47.258702   45115 main.go:141] libmachine: Reticulating splines...
	I0416 17:23:47.258708   45115 client.go:171] duration metric: took 26.982661097s to LocalClient.Create
	I0416 17:23:47.258741   45115 start.go:167] duration metric: took 26.982726138s to libmachine.API.Create "old-k8s-version-795352"
	I0416 17:23:47.258751   45115 start.go:293] postStartSetup for "old-k8s-version-795352" (driver="kvm2")
	I0416 17:23:47.258769   45115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:23:47.258785   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:23:47.259025   45115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:23:47.259048   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:47.261279   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.261631   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:47.261677   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.261824   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:47.262001   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:47.262159   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:47.262302   45115 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa Username:docker}
	I0416 17:23:47.345554   45115 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:23:47.350508   45115 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:23:47.350529   45115 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:23:47.350581   45115 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:23:47.350660   45115 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:23:47.350771   45115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:23:47.361846   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:23:47.387474   45115 start.go:296] duration metric: took 128.704914ms for postStartSetup
	I0416 17:23:47.387521   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetConfigRaw
	I0416 17:23:47.388048   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetIP
	I0416 17:23:47.390439   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.390838   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:47.390865   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.391092   45115 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/config.json ...
	I0416 17:23:47.391275   45115 start.go:128] duration metric: took 27.136320534s to createHost
	I0416 17:23:47.391298   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:47.393689   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.394055   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:47.394085   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.394165   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:47.394342   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:47.394526   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:47.394702   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:47.394908   45115 main.go:141] libmachine: Using SSH client type: native
	I0416 17:23:47.395101   45115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:23:47.395116   45115 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 17:23:47.498479   45115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713288227.482971805
	
	I0416 17:23:47.498515   45115 fix.go:216] guest clock: 1713288227.482971805
	I0416 17:23:47.498525   45115 fix.go:229] Guest: 2024-04-16 17:23:47.482971805 +0000 UTC Remote: 2024-04-16 17:23:47.391287239 +0000 UTC m=+51.986466631 (delta=91.684566ms)
	I0416 17:23:47.498563   45115 fix.go:200] guest clock delta is within tolerance: 91.684566ms
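Note: fix.go above parses the guest's `date +%s.%N` output and compares it with the host clock, accepting the machine when the delta is inside a tolerance. A small sketch of that comparison (the one-second tolerance here is an assumption for illustration, not minikube's actual setting):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1713288227.482971805" (date +%s.%N) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1713288227.482971805")
        if err != nil {
            fmt.Println(err)
            return
        }
        delta := time.Since(guest)
        const tolerance = time.Second // assumed tolerance for this sketch
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %s is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
        }
    }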
	I0416 17:23:47.498568   45115 start.go:83] releasing machines lock for "old-k8s-version-795352", held for 27.243847024s
	I0416 17:23:47.498596   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:23:47.498868   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetIP
	I0416 17:23:47.501818   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.502236   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:47.502266   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.502466   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:23:47.502967   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:23:47.503171   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:23:47.503281   45115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:23:47.503344   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:47.503410   45115 ssh_runner.go:195] Run: cat /version.json
	I0416 17:23:47.503435   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:23:47.506167   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.506519   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:47.506544   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.506563   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.506703   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:47.506874   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:47.507057   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:47.507104   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:47.507130   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:47.507203   45115 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa Username:docker}
	I0416 17:23:47.507293   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:23:47.507444   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:23:47.507589   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:23:47.507719   45115 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa Username:docker}
	I0416 17:23:47.607920   45115 ssh_runner.go:195] Run: systemctl --version
	I0416 17:23:47.615130   45115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:23:47.787000   45115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:23:47.794621   45115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:23:47.794702   45115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:23:47.812947   45115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:23:47.812974   45115 start.go:494] detecting cgroup driver to use...
	I0416 17:23:47.813041   45115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:23:47.831639   45115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:23:47.849154   45115 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:23:47.849198   45115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:23:47.869770   45115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:23:47.887809   45115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:23:48.006494   45115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:23:48.180331   45115 docker.go:233] disabling docker service ...
	I0416 17:23:48.180412   45115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:23:48.199547   45115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:23:48.216315   45115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:23:48.370026   45115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:23:48.526678   45115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:23:48.554729   45115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:23:48.582608   45115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 17:23:48.582673   45115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:23:48.599249   45115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:23:48.599309   45115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:23:48.616156   45115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:23:48.632068   45115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
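Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pinned pause image and the cgroupfs cgroup manager with conmon in the "pod" cgroup. The same line-oriented edit could be done in Go (a sketch against a hypothetical local copy of the file; the real flow runs sed over SSH exactly as logged):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf

        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }

        // Mirror the sed edits above: pin the pause image, drop any existing
        // conmon_cgroup line, then set cgroup_manager and re-add conmon_cgroup = "pod".
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conmon := regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

        out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        out = conmon.ReplaceAll(out, nil)
        out = cgroup.ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))

        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("updated", path)
    }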
	I0416 17:23:48.646199   45115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:23:48.662343   45115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:23:48.674657   45115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 17:23:48.674721   45115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 17:23:48.691274   45115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:23:48.703350   45115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:23:48.866853   45115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:23:49.011954   45115 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:23:49.012014   45115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:23:49.017953   45115 start.go:562] Will wait 60s for crictl version
	I0416 17:23:49.018015   45115 ssh_runner.go:195] Run: which crictl
	I0416 17:23:49.022289   45115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:23:49.069223   45115 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:23:49.069307   45115 ssh_runner.go:195] Run: crio --version
	I0416 17:23:49.110257   45115 ssh_runner.go:195] Run: crio --version
	I0416 17:23:49.155652   45115 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 17:23:49.156993   45115 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetIP
	I0416 17:23:49.160514   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:49.160947   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:23:36 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:23:49.161001   45115 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:23:49.161183   45115 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 17:23:49.168396   45115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:23:49.185445   45115 kubeadm.go:877] updating cluster {Name:old-k8s-version-795352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:23:49.185583   45115 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 17:23:49.185679   45115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:23:49.229817   45115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 17:23:49.229909   45115 ssh_runner.go:195] Run: which lz4
	I0416 17:23:49.234552   45115 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 17:23:49.240019   45115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:23:49.240047   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 17:23:51.430296   45115 crio.go:462] duration metric: took 2.195768121s to copy over tarball
	I0416 17:23:51.430377   45115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:23:54.757140   45115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.326732555s)
	I0416 17:23:54.757190   45115 crio.go:469] duration metric: took 3.326861057s to extract the tarball
	I0416 17:23:54.757200   45115 ssh_runner.go:146] rm: /preloaded.tar.lz4
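Note: the lines above copy the ~473 MB preload tarball into the guest and unpack it under /var with tar before deleting it. Run locally, the equivalent extraction step looks like this (a sketch using os/exec with the same tarball path as in the log; the real run happens over SSH as logged):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4" // path used in the log above

        // Same tar invocation as the ssh_runner line above: keep xattrs, decompress with lz4,
        // and unpack under /var where CRI-O expects its image store.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("extract preload: %v", err)
        }
        // Free the space once the images are in place, mirroring the rm step above.
        if err := os.Remove(tarball); err != nil {
            log.Printf("rm %s: %v", tarball, err)
        }
    }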
	I0416 17:23:54.804344   45115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:23:54.856493   45115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 17:23:54.856520   45115 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 17:23:54.856589   45115 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:23:54.856641   45115 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:23:54.856682   45115 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:23:54.856694   45115 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:23:54.856732   45115 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:23:54.856796   45115 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 17:23:54.856804   45115 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 17:23:54.856590   45115 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:23:54.858987   45115 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 17:23:54.859001   45115 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:23:54.859019   45115 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:23:54.858991   45115 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:23:54.858992   45115 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:23:54.859048   45115 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:23:54.859154   45115 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 17:23:54.859305   45115 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:23:55.011843   45115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 17:23:55.016406   45115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:23:55.041770   45115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 17:23:55.047823   45115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:23:55.048183   45115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:23:55.064333   45115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 17:23:55.093822   45115 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 17:23:55.093865   45115 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 17:23:55.093916   45115 ssh_runner.go:195] Run: which crictl
	I0416 17:23:55.096939   45115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:23:55.140552   45115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:23:55.144318   45115 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 17:23:55.144364   45115 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:23:55.144410   45115 ssh_runner.go:195] Run: which crictl
	I0416 17:23:55.256982   45115 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 17:23:55.257035   45115 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:23:55.257084   45115 ssh_runner.go:195] Run: which crictl
	I0416 17:23:55.257182   45115 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 17:23:55.257209   45115 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:23:55.257243   45115 ssh_runner.go:195] Run: which crictl
	I0416 17:23:55.257325   45115 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 17:23:55.257350   45115 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:23:55.257383   45115 ssh_runner.go:195] Run: which crictl
	I0416 17:23:55.257446   45115 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 17:23:55.257473   45115 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 17:23:55.257496   45115 ssh_runner.go:195] Run: which crictl
	I0416 17:23:55.257551   45115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 17:23:55.293752   45115 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 17:23:55.293802   45115 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:23:55.293851   45115 ssh_runner.go:195] Run: which crictl
	I0416 17:23:55.402637   45115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:23:55.402708   45115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:23:55.402748   45115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 17:23:55.402788   45115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:23:55.402828   45115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 17:23:55.402869   45115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 17:23:55.402888   45115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:23:55.548974   45115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 17:23:55.563200   45115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 17:23:55.563257   45115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 17:23:55.563293   45115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 17:23:55.563342   45115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 17:23:55.563356   45115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 17:23:55.563405   45115 cache_images.go:92] duration metric: took 706.871542ms to LoadCachedImages
	W0416 17:23:55.563463   45115 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
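	(Because the per-image cache files under .minikube/cache/images are missing, the run continues and kubeadm pulls the images itself during preflight, as seen further down. A minimal sketch for checking what is actually present in the guest's CRI-O store, using an image name taken from the log above:)

    # Inspect an image ID the way minikube does, via podman
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.20.0
    # Or list and remove it through the CRI instead
    sudo crictl images | grep kube-apiserver
    sudo crictl rmi registry.k8s.io/kube-apiserver:v1.20.0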
	I0416 17:23:55.563479   45115 kubeadm.go:928] updating node { 192.168.50.168 8443 v1.20.0 crio true true} ...
	I0416 17:23:55.563594   45115 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-795352 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
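	(The [Unit]/[Service] fragment above is what minikube installs as the kubelet systemd drop-in, written a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch, assuming SSH access to the guest, for verifying the flags the kubelet will actually run with:)

    # Show the kubelet unit together with minikube's drop-in
    systemctl cat kubelet
    # Confirm the drop-in on disk matches the ExecStart flags logged above
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf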
	I0416 17:23:55.563666   45115 ssh_runner.go:195] Run: crio config
	I0416 17:23:55.622294   45115 cni.go:84] Creating CNI manager for ""
	I0416 17:23:55.622318   45115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:23:55.622326   45115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:23:55.622345   45115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.168 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-795352 NodeName:old-k8s-version-795352 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 17:23:55.622483   45115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-795352"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
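	(The kubeadm config printed above is rendered to /var/tmp/minikube/kubeadm.yaml on the node, as the init command further down shows. As a minimal sketch, assuming SSH access to the guest and using kubeadm's standalone preflight phase, it can be sanity-checked before a full init:)

    # Inspect the rendered config on the node
    sudo cat /var/tmp/minikube/kubeadm.yaml
    # Run only the preflight phase against it with the bundled kubeadm binary
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml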
	
	I0416 17:23:55.622539   45115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 17:23:55.634259   45115 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:23:55.634333   45115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:23:55.645243   45115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0416 17:23:55.665692   45115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:23:55.686738   45115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0416 17:23:55.707076   45115 ssh_runner.go:195] Run: grep 192.168.50.168	control-plane.minikube.internal$ /etc/hosts
	I0416 17:23:55.711858   45115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:23:55.725927   45115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:23:55.878089   45115 ssh_runner.go:195] Run: sudo systemctl start kubelet
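	(The bash snippet above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP before the kubelet is started. A quick, illustrative way to confirm the entry took effect on the guest:)

    getent hosts control-plane.minikube.internal
    grep control-plane.minikube.internal /etc/hosts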
	I0416 17:23:55.899823   45115 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352 for IP: 192.168.50.168
	I0416 17:23:55.899848   45115 certs.go:194] generating shared ca certs ...
	I0416 17:23:55.899869   45115 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:23:55.900037   45115 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:23:55.900090   45115 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:23:55.900102   45115 certs.go:256] generating profile certs ...
	I0416 17:23:55.900168   45115 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.key
	I0416 17:23:55.900206   45115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt with IP's: []
	I0416 17:23:56.004591   45115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt ...
	I0416 17:23:56.004623   45115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: {Name:mk3f82dfe2f09d66fe8d85a0719bf8462ddd47ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:23:56.004816   45115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.key ...
	I0416 17:23:56.004833   45115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.key: {Name:mkb6ee3b86351e698301059c8fa3eed3993c8715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:23:56.004968   45115 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.key.8f51567a
	I0416 17:23:56.004994   45115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.crt.8f51567a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.168]
	I0416 17:23:56.073791   45115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.crt.8f51567a ...
	I0416 17:23:56.073834   45115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.crt.8f51567a: {Name:mkb92c7f7d61010d09b3fce64d699ab7445a8052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:23:56.074029   45115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.key.8f51567a ...
	I0416 17:23:56.074052   45115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.key.8f51567a: {Name:mk12875efe82ba9ce5d07be79e4225664f50bca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:23:56.074196   45115 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.crt.8f51567a -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.crt
	I0416 17:23:56.074334   45115 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.key.8f51567a -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.key
	I0416 17:23:56.074447   45115 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.key
	I0416 17:23:56.074470   45115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.crt with IP's: []
	I0416 17:23:56.354228   45115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.crt ...
	I0416 17:23:56.354257   45115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.crt: {Name:mk351e72514648f18e12fd76276a8feb725f19b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:23:56.354411   45115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.key ...
	I0416 17:23:56.354424   45115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.key: {Name:mka778774f8b4c4bd33eb26b868dea4f74deca25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:23:56.354615   45115 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:23:56.354651   45115 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:23:56.354662   45115 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:23:56.354694   45115 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:23:56.354720   45115 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:23:56.354741   45115 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:23:56.354777   45115 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:23:56.355362   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:23:56.385356   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:23:56.418822   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:23:56.447724   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:23:56.482782   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 17:23:56.517557   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 17:23:56.551262   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:23:56.581519   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:23:56.610760   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:23:56.637519   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:23:56.666832   45115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:23:56.695496   45115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
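	(At this point all profile certs have been copied under /var/lib/minikube/certs. A minimal sketch, using the paths from the scp lines above, for double-checking the apiserver certificate that kubeadm will reuse:)

    # SANs should include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.168
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt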
	I0416 17:23:56.714824   45115 ssh_runner.go:195] Run: openssl version
	I0416 17:23:56.721645   45115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:23:56.738289   45115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:23:56.750971   45115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:23:56.751036   45115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:23:56.761119   45115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:23:56.780094   45115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:23:56.795991   45115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:23:56.808531   45115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:23:56.808596   45115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:23:56.817091   45115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:23:56.835589   45115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:23:56.847421   45115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:23:56.852593   45115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:23:56.852649   45115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:23:56.859059   45115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
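	(The ln -fs commands above install the CA bundle the way OpenSSL expects it: each cert in /etc/ssl/certs is reachable through a symlink named after its subject hash, hence b5213941.0 for minikubeCA.pem. A minimal sketch that reproduces the hash-to-symlink mapping on the guest:)

    # Compute the subject hash, then confirm the matching symlink exists
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0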
	I0416 17:23:56.870664   45115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:23:56.875458   45115 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:23:56.875535   45115 kubeadm.go:391] StartCluster: {Name:old-k8s-version-795352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:23:56.875645   45115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:23:56.875711   45115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:23:56.913738   45115 cri.go:89] found id: ""
	I0416 17:23:56.913811   45115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 17:23:56.924306   45115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:23:56.934190   45115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:23:56.944068   45115 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:23:56.944087   45115 kubeadm.go:156] found existing configuration files:
	
	I0416 17:23:56.944146   45115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:23:56.953487   45115 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:23:56.953544   45115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:23:56.964023   45115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:23:56.973947   45115 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:23:56.974011   45115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:23:56.984420   45115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:23:56.994576   45115 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:23:56.994629   45115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:23:57.005201   45115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:23:57.014945   45115 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:23:57.014994   45115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:23:57.025097   45115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:23:57.302512   45115 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:25:55.994371   45115 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 17:25:55.994481   45115 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 17:25:55.997417   45115 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 17:25:55.997490   45115 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:25:55.997586   45115 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:25:55.997706   45115 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:25:55.997833   45115 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:25:55.997909   45115 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:25:55.999644   45115 out.go:204]   - Generating certificates and keys ...
	I0416 17:25:55.999744   45115 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:25:55.999845   45115 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:25:55.999948   45115 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:25:56.000022   45115 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:25:56.000098   45115 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 17:25:56.000162   45115 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 17:25:56.000234   45115 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 17:25:56.000399   45115 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-795352] and IPs [192.168.50.168 127.0.0.1 ::1]
	I0416 17:25:56.000465   45115 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 17:25:56.000626   45115 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-795352] and IPs [192.168.50.168 127.0.0.1 ::1]
	I0416 17:25:56.000715   45115 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:25:56.000794   45115 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:25:56.000862   45115 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 17:25:56.000931   45115 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:25:56.000992   45115 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:25:56.001057   45115 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:25:56.001136   45115 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:25:56.001205   45115 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:25:56.001328   45115 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:25:56.001430   45115 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:25:56.001479   45115 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:25:56.001563   45115 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:25:56.003091   45115 out.go:204]   - Booting up control plane ...
	I0416 17:25:56.003203   45115 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:25:56.003304   45115 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:25:56.003414   45115 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:25:56.003539   45115 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:25:56.003780   45115 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:25:56.003863   45115 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 17:25:56.003956   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:25:56.004198   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:25:56.004314   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:25:56.004584   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:25:56.004670   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:25:56.004936   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:25:56.005020   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:25:56.005243   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:25:56.005328   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:25:56.005551   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:25:56.005557   45115 kubeadm.go:309] 
	I0416 17:25:56.005608   45115 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 17:25:56.005659   45115 kubeadm.go:309] 		timed out waiting for the condition
	I0416 17:25:56.005664   45115 kubeadm.go:309] 
	I0416 17:25:56.005709   45115 kubeadm.go:309] 	This error is likely caused by:
	I0416 17:25:56.005751   45115 kubeadm.go:309] 		- The kubelet is not running
	I0416 17:25:56.005881   45115 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 17:25:56.005888   45115 kubeadm.go:309] 
	I0416 17:25:56.006015   45115 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 17:25:56.006055   45115 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 17:25:56.006096   45115 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 17:25:56.006101   45115 kubeadm.go:309] 
	I0416 17:25:56.006247   45115 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 17:25:56.006344   45115 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 17:25:56.006352   45115 kubeadm.go:309] 
	I0416 17:25:56.006482   45115 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 17:25:56.006591   45115 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 17:25:56.006681   45115 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 17:25:56.006772   45115 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0416 17:25:56.006924   45115 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-795352] and IPs [192.168.50.168 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-795352] and IPs [192.168.50.168 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-795352] and IPs [192.168.50.168 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-795352] and IPs [192.168.50.168 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
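	(The failure mode here is kubeadm timing out in wait-control-plane because the kubelet never answers on :10248. Before the retry below, kubeadm's own suggestions can be followed on this CRI-O node roughly as sketched here, with the socket path as printed in the log:)

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Then inspect the failing container's logs
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID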
	
	I0416 17:25:56.006982   45115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 17:25:56.007275   45115 kubeadm.go:309] 
	I0416 17:25:58.573541   45115 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.566530482s)
	I0416 17:25:58.573630   45115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:25:58.593490   45115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:25:58.609698   45115 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:25:58.609721   45115 kubeadm.go:156] found existing configuration files:
	
	I0416 17:25:58.609765   45115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:25:58.624157   45115 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:25:58.624237   45115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:25:58.640048   45115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:25:58.654869   45115 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:25:58.654945   45115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:25:58.667206   45115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:25:58.678485   45115 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:25:58.678551   45115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:25:58.693719   45115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:25:58.708092   45115 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:25:58.708155   45115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:25:58.722520   45115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:25:58.799009   45115 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 17:25:58.799128   45115 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:25:58.982077   45115 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:25:58.982235   45115 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:25:58.982411   45115 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:25:59.217947   45115 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:25:59.219479   45115 out.go:204]   - Generating certificates and keys ...
	I0416 17:25:59.219581   45115 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:25:59.219664   45115 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:25:59.219919   45115 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 17:25:59.220151   45115 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 17:25:59.220795   45115 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 17:25:59.221130   45115 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 17:25:59.221795   45115 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 17:25:59.222558   45115 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 17:25:59.223165   45115 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 17:25:59.223682   45115 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 17:25:59.223936   45115 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 17:25:59.224077   45115 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:25:59.475778   45115 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:25:59.614871   45115 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:25:59.878982   45115 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:25:59.965832   45115 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:25:59.981292   45115 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:25:59.982921   45115 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:25:59.982992   45115 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:26:00.170478   45115 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:26:00.172197   45115 out.go:204]   - Booting up control plane ...
	I0416 17:26:00.172327   45115 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:26:00.187140   45115 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:26:00.188829   45115 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:26:00.191557   45115 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:26:00.207167   45115 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:26:40.210100   45115 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 17:26:40.210529   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:26:40.210848   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:26:45.212216   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:26:45.212503   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:26:55.212940   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:26:55.213161   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:27:15.214346   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:27:15.214511   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:27:55.214746   45115 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:27:55.215016   45115 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:27:55.215028   45115 kubeadm.go:309] 
	I0416 17:27:55.215103   45115 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 17:27:55.215178   45115 kubeadm.go:309] 		timed out waiting for the condition
	I0416 17:27:55.215194   45115 kubeadm.go:309] 
	I0416 17:27:55.215249   45115 kubeadm.go:309] 	This error is likely caused by:
	I0416 17:27:55.215301   45115 kubeadm.go:309] 		- The kubelet is not running
	I0416 17:27:55.215415   45115 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 17:27:55.215426   45115 kubeadm.go:309] 
	I0416 17:27:55.215559   45115 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 17:27:55.215612   45115 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 17:27:55.215653   45115 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 17:27:55.215662   45115 kubeadm.go:309] 
	I0416 17:27:55.215811   45115 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 17:27:55.215935   45115 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 17:27:55.215949   45115 kubeadm.go:309] 
	I0416 17:27:55.216106   45115 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 17:27:55.216217   45115 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 17:27:55.216334   45115 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 17:27:55.216446   45115 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 17:27:55.216458   45115 kubeadm.go:309] 
	I0416 17:27:55.217573   45115 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:27:55.217687   45115 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 17:27:55.217808   45115 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 17:27:55.217851   45115 kubeadm.go:393] duration metric: took 3m58.342323506s to StartCluster
	I0416 17:27:55.217898   45115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:27:55.217957   45115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:27:55.272815   45115 cri.go:89] found id: ""
	I0416 17:27:55.272863   45115 logs.go:276] 0 containers: []
	W0416 17:27:55.272875   45115 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:27:55.272883   45115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:27:55.272957   45115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:27:55.317244   45115 cri.go:89] found id: ""
	I0416 17:27:55.317273   45115 logs.go:276] 0 containers: []
	W0416 17:27:55.317283   45115 logs.go:278] No container was found matching "etcd"
	I0416 17:27:55.317290   45115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:27:55.317345   45115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:27:55.354900   45115 cri.go:89] found id: ""
	I0416 17:27:55.354924   45115 logs.go:276] 0 containers: []
	W0416 17:27:55.354934   45115 logs.go:278] No container was found matching "coredns"
	I0416 17:27:55.354942   45115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:27:55.355008   45115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:27:55.402964   45115 cri.go:89] found id: ""
	I0416 17:27:55.402993   45115 logs.go:276] 0 containers: []
	W0416 17:27:55.403004   45115 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:27:55.403012   45115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:27:55.403070   45115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:27:55.450504   45115 cri.go:89] found id: ""
	I0416 17:27:55.450536   45115 logs.go:276] 0 containers: []
	W0416 17:27:55.450546   45115 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:27:55.450555   45115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:27:55.450633   45115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:27:55.504652   45115 cri.go:89] found id: ""
	I0416 17:27:55.504681   45115 logs.go:276] 0 containers: []
	W0416 17:27:55.504692   45115 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:27:55.504701   45115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:27:55.504767   45115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:27:55.556712   45115 cri.go:89] found id: ""
	I0416 17:27:55.556744   45115 logs.go:276] 0 containers: []
	W0416 17:27:55.556754   45115 logs.go:278] No container was found matching "kindnet"
	I0416 17:27:55.556765   45115 logs.go:123] Gathering logs for container status ...
	I0416 17:27:55.556780   45115 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:27:55.601293   45115 logs.go:123] Gathering logs for kubelet ...
	I0416 17:27:55.601327   45115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:27:55.659597   45115 logs.go:123] Gathering logs for dmesg ...
	I0416 17:27:55.659626   45115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:27:55.677050   45115 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:27:55.677082   45115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:27:55.819117   45115 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:27:55.819149   45115 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:27:55.819164   45115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0416 17:27:55.937543   45115 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 17:27:55.937616   45115 out.go:239] * 
	* 
	W0416 17:27:55.937667   45115 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:27:55.937688   45115 out.go:239] * 
	* 
	W0416 17:27:55.938577   45115 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:27:55.942231   45115 out.go:177] 
	W0416 17:27:55.943421   45115 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:27:55.943492   45115 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 17:27:55.943533   45115 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 17:27:55.945125   45115 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-795352 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 6 (272.689923ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:27:56.263272   51691 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-795352" does not appear in /home/jenkins/minikube-integration/18649-3628/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-795352" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (300.88s)
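The failure mode in the captured kubeadm output is consistent throughout: the kubelet's healthz endpoint on 127.0.0.1:10248 never answers, no control-plane containers are ever created, and minikube's own suggestion above points at the kubelet cgroup driver. A minimal diagnosis/retry sketch, using only commands already shown in this log plus the recorded start arguments (the profile name and flags are this run's; the retry flag is the log's suggestion, not a confirmed fix):

    # on the node: minikube ssh -p old-k8s-version-795352
    systemctl status kubelet                  # is the unit running at all?
    journalctl -xeu kubelet | tail -n 100     # look for a cgroup-driver / config error
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # may need sudo

    # from the host: retry with the flag suggested in the log
    out/minikube-linux-amd64 delete -p old-k8s-version-795352
    out/minikube-linux-amd64 start -p old-k8s-version-795352 --driver=kvm2 \
      --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd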

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-795352 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-795352 create -f testdata/busybox.yaml: exit status 1 (45.863826ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-795352" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-795352 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 6 (253.467397ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:27:56.565786   51729 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-795352" does not appear in /home/jenkins/minikube-integration/18649-3628/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-795352" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 6 (258.563971ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:27:56.821882   51758 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-795352" does not appear in /home/jenkins/minikube-integration/18649-3628/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-795352" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)
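This deploy never reaches the cluster: the FirstStart failure above left the apiserver down and, per the status.go:417 error, the profile was never written into /home/jenkins/minikube-integration/18649-3628/kubeconfig, so every `kubectl --context old-k8s-version-795352` call fails immediately with "context does not exist". A short sketch of how the missing context would be confirmed and, once the cluster actually starts, repaired as the status warning suggests (the repair only helps after a successful start):

    kubectl config get-contexts                                         # profile absent from the kubeconfig
    out/minikube-linux-amd64 update-context -p old-k8s-version-795352   # what the status warning recommends
    kubectl --context old-k8s-version-795352 create -f testdata/busybox.yaml   # the test's own command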

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-795352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-795352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m47.384798977s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-795352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-795352 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-795352 describe deploy/metrics-server -n kube-system: exit status 1 (43.595184ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-795352" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-795352 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 6 (228.814036ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:29:44.483961   52517 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-795352" does not appear in /home/jenkins/minikube-integration/18649-3628/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-795352" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.66s)
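The addon enable spends its 1m47s retrying `kubectl apply` against localhost:8443, which is refused because no kube-apiserver container ever started (the crictl sweep in the FirstStart log found none). A hedged sketch of the reachability check one might run inside the VM before retrying the addon; the crictl and kubectl invocations are taken from this log, while the healthz probe is an assumption about the standard apiserver endpoint:

    # inside the VM: minikube ssh -p old-k8s-version-795352
    sudo crictl ps -a --quiet --name=kube-apiserver     # same check the log runs; empty here
    curl -k https://localhost:8443/healthz              # refused until the apiserver is up

    # one of the manifests the addon callback applies, for reference:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl \
      apply --force -f /etc/kubernetes/addons/metrics-server-deployment.yaml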

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-368813 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-368813 --alsologtostderr -v=3: exit status 82 (2m0.555254451s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-368813"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 17:28:57.995824   52276 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:28:57.996066   52276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:28:57.996077   52276 out.go:304] Setting ErrFile to fd 2...
	I0416 17:28:57.996080   52276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:28:57.996281   52276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:28:57.996512   52276 out.go:298] Setting JSON to false
	I0416 17:28:57.996583   52276 mustload.go:65] Loading cluster: no-preload-368813
	I0416 17:28:57.996943   52276 config.go:182] Loaded profile config "no-preload-368813": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:28:57.997009   52276 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/config.json ...
	I0416 17:28:57.997192   52276 mustload.go:65] Loading cluster: no-preload-368813
	I0416 17:28:57.997346   52276 config.go:182] Loaded profile config "no-preload-368813": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:28:57.997387   52276 stop.go:39] StopHost: no-preload-368813
	I0416 17:28:57.997789   52276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:28:57.997831   52276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:28:58.013143   52276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0416 17:28:58.013524   52276 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:28:58.014108   52276 main.go:141] libmachine: Using API Version  1
	I0416 17:28:58.014132   52276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:28:58.014449   52276 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:28:58.016724   52276 out.go:177] * Stopping node "no-preload-368813"  ...
	I0416 17:28:58.017957   52276 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 17:28:58.017984   52276 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:28:58.018367   52276 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 17:28:58.018411   52276 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:28:58.021798   52276 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:28:58.022201   52276 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:27:39 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:28:58.022295   52276 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:28:58.022467   52276 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:28:58.022668   52276 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:28:58.022789   52276 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:28:58.022946   52276 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:28:58.140705   52276 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 17:28:58.204705   52276 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 17:28:58.287741   52276 main.go:141] libmachine: Stopping "no-preload-368813"...
	I0416 17:28:58.287766   52276 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:28:58.289539   52276 main.go:141] libmachine: (no-preload-368813) Calling .Stop
	I0416 17:28:58.293762   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 0/120
	I0416 17:28:59.295501   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 1/120
	I0416 17:29:00.297100   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 2/120
	I0416 17:29:01.299256   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 3/120
	I0416 17:29:02.300396   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 4/120
	I0416 17:29:03.302476   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 5/120
	I0416 17:29:04.303787   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 6/120
	I0416 17:29:05.305314   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 7/120
	I0416 17:29:06.306661   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 8/120
	I0416 17:29:07.307848   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 9/120
	I0416 17:29:08.309950   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 10/120
	I0416 17:29:09.311205   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 11/120
	I0416 17:29:10.312494   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 12/120
	I0416 17:29:11.313931   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 13/120
	I0416 17:29:12.316110   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 14/120
	I0416 17:29:13.317959   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 15/120
	I0416 17:29:14.319338   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 16/120
	I0416 17:29:15.320702   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 17/120
	I0416 17:29:16.322898   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 18/120
	I0416 17:29:17.324229   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 19/120
	I0416 17:29:18.326135   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 20/120
	I0416 17:29:19.327763   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 21/120
	I0416 17:29:20.329026   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 22/120
	I0416 17:29:21.331325   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 23/120
	I0416 17:29:22.332553   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 24/120
	I0416 17:29:23.334309   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 25/120
	I0416 17:29:24.335489   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 26/120
	I0416 17:29:25.336879   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 27/120
	I0416 17:29:26.338130   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 28/120
	I0416 17:29:27.339310   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 29/120
	I0416 17:29:28.340574   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 30/120
	I0416 17:29:29.341815   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 31/120
	I0416 17:29:30.343141   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 32/120
	I0416 17:29:31.344452   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 33/120
	I0416 17:29:32.346000   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 34/120
	I0416 17:29:33.348107   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 35/120
	I0416 17:29:34.349707   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 36/120
	I0416 17:29:35.351026   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 37/120
	I0416 17:29:36.352844   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 38/120
	I0416 17:29:37.353997   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 39/120
	I0416 17:29:38.356066   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 40/120
	I0416 17:29:39.357429   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 41/120
	I0416 17:29:40.358798   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 42/120
	I0416 17:29:41.360131   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 43/120
	I0416 17:29:42.361642   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 44/120
	I0416 17:29:43.363717   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 45/120
	I0416 17:29:44.364829   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 46/120
	I0416 17:29:45.366090   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 47/120
	I0416 17:29:46.367353   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 48/120
	I0416 17:29:47.368649   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 49/120
	I0416 17:29:48.370801   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 50/120
	I0416 17:29:49.372293   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 51/120
	I0416 17:29:50.373832   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 52/120
	I0416 17:29:51.375286   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 53/120
	I0416 17:29:52.377650   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 54/120
	I0416 17:29:53.379363   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 55/120
	I0416 17:29:54.381753   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 56/120
	I0416 17:29:55.383120   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 57/120
	I0416 17:29:56.384515   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 58/120
	I0416 17:29:57.385840   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 59/120
	I0416 17:29:58.387923   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 60/120
	I0416 17:29:59.389838   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 61/120
	I0416 17:30:00.391428   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 62/120
	I0416 17:30:01.392738   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 63/120
	I0416 17:30:02.394270   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 64/120
	I0416 17:30:03.396244   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 65/120
	I0416 17:30:04.397623   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 66/120
	I0416 17:30:05.399052   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 67/120
	I0416 17:30:06.400277   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 68/120
	I0416 17:30:07.401628   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 69/120
	I0416 17:30:08.403033   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 70/120
	I0416 17:30:09.404832   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 71/120
	I0416 17:30:10.406111   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 72/120
	I0416 17:30:11.407598   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 73/120
	I0416 17:30:12.409103   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 74/120
	I0416 17:30:13.411392   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 75/120
	I0416 17:30:14.412904   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 76/120
	I0416 17:30:15.414374   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 77/120
	I0416 17:30:16.416103   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 78/120
	I0416 17:30:17.417744   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 79/120
	I0416 17:30:18.419727   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 80/120
	I0416 17:30:19.422037   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 81/120
	I0416 17:30:20.423413   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 82/120
	I0416 17:30:21.424741   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 83/120
	I0416 17:30:22.426179   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 84/120
	I0416 17:30:23.428195   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 85/120
	I0416 17:30:24.429535   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 86/120
	I0416 17:30:25.431624   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 87/120
	I0416 17:30:26.433059   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 88/120
	I0416 17:30:27.434255   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 89/120
	I0416 17:30:28.436196   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 90/120
	I0416 17:30:29.437728   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 91/120
	I0416 17:30:30.439297   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 92/120
	I0416 17:30:31.441107   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 93/120
	I0416 17:30:32.442471   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 94/120
	I0416 17:30:33.444316   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 95/120
	I0416 17:30:34.445714   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 96/120
	I0416 17:30:35.447142   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 97/120
	I0416 17:30:36.448785   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 98/120
	I0416 17:30:37.450086   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 99/120
	I0416 17:30:38.452141   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 100/120
	I0416 17:30:39.453569   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 101/120
	I0416 17:30:40.455322   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 102/120
	I0416 17:30:41.456645   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 103/120
	I0416 17:30:42.457935   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 104/120
	I0416 17:30:43.459877   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 105/120
	I0416 17:30:44.461106   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 106/120
	I0416 17:30:45.463133   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 107/120
	I0416 17:30:46.464431   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 108/120
	I0416 17:30:47.466006   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 109/120
	I0416 17:30:48.467990   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 110/120
	I0416 17:30:49.469407   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 111/120
	I0416 17:30:50.470756   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 112/120
	I0416 17:30:51.472025   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 113/120
	I0416 17:30:52.473412   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 114/120
	I0416 17:30:53.475240   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 115/120
	I0416 17:30:54.476610   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 116/120
	I0416 17:30:55.477945   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 117/120
	I0416 17:30:56.479196   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 118/120
	I0416 17:30:57.480498   52276 main.go:141] libmachine: (no-preload-368813) Waiting for machine to stop 119/120
	I0416 17:30:58.481912   52276 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 17:30:58.481959   52276 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 17:30:58.483724   52276 out.go:177] 
	W0416 17:30:58.484982   52276 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 17:30:58.484996   52276 out.go:239] * 
	* 
	W0416 17:30:58.487958   52276 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:30:58.489467   52276 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-368813 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813: exit status 3 (18.597761795s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:31:17.089160   53094 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.33:22: connect: no route to host
	E0416 17:31:17.089183   53094 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.33:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-368813" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.15s)
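The stop failure above follows the sequence visible in the stderr log: minikube backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH, asks the kvm2 driver to stop the VM, then polls the machine state once per second for 120 attempts; because the domain never leaves the "Running" state, the command gives up and exits with GUEST_STOP_TIMEOUT (exit status 82). The Go sketch below illustrates only that poll-until-stopped loop; the vmDriver interface and its method names are assumptions for illustration, not minikube's actual driver API.

package stopper

import (
	"errors"
	"fmt"
	"time"
)

// vmDriver is a stand-in for a libmachine-style driver; the method names are
// assumptions for illustration only.
type vmDriver interface {
	Stop() error            // request an asynchronous shutdown
	State() (string, error) // report the current VM state, e.g. "Running" or "Stopped"
}

// stopWithTimeout requests a stop and then polls once per second for at most
// `attempts` iterations, mirroring the "Waiting for machine to stop N/120"
// lines in the log above.
func stopWithTimeout(d vmDriver, attempts int) error {
	if err := d.Stop(); err != nil {
		return fmt.Errorf("requesting stop: %w", err)
	}
	for i := 0; i < attempts; i++ {
		state, err := d.State()
		if err != nil {
			return fmt.Errorf("querying state: %w", err)
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

With attempts set to 120, this reproduces the two-minute wait seen above before the GUEST_STOP_TIMEOUT error is reported.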

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-512869 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-512869 --alsologtostderr -v=3: exit status 82 (2m0.51662348s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-512869"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 17:28:58.825337   52315 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:28:58.825487   52315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:28:58.825499   52315 out.go:304] Setting ErrFile to fd 2...
	I0416 17:28:58.825506   52315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:28:58.825716   52315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:28:58.825959   52315 out.go:298] Setting JSON to false
	I0416 17:28:58.826063   52315 mustload.go:65] Loading cluster: embed-certs-512869
	I0416 17:28:58.826396   52315 config.go:182] Loaded profile config "embed-certs-512869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:28:58.826478   52315 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/embed-certs-512869/config.json ...
	I0416 17:28:58.826657   52315 mustload.go:65] Loading cluster: embed-certs-512869
	I0416 17:28:58.826782   52315 config.go:182] Loaded profile config "embed-certs-512869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:28:58.826822   52315 stop.go:39] StopHost: embed-certs-512869
	I0416 17:28:58.827216   52315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:28:58.827276   52315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:28:58.841962   52315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39109
	I0416 17:28:58.842399   52315 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:28:58.842942   52315 main.go:141] libmachine: Using API Version  1
	I0416 17:28:58.842963   52315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:28:58.843300   52315 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:28:58.845873   52315 out.go:177] * Stopping node "embed-certs-512869"  ...
	I0416 17:28:58.847342   52315 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 17:28:58.847380   52315 main.go:141] libmachine: (embed-certs-512869) Calling .DriverName
	I0416 17:28:58.847597   52315 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 17:28:58.847632   52315 main.go:141] libmachine: (embed-certs-512869) Calling .GetSSHHostname
	I0416 17:28:58.849953   52315 main.go:141] libmachine: (embed-certs-512869) DBG | domain embed-certs-512869 has defined MAC address 52:54:00:9f:eb:19 in network mk-embed-certs-512869
	I0416 17:28:58.850360   52315 main.go:141] libmachine: (embed-certs-512869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:eb:19", ip: ""} in network mk-embed-certs-512869: {Iface:virbr3 ExpiryTime:2024-04-16 18:28:04 +0000 UTC Type:0 Mac:52:54:00:9f:eb:19 Iaid: IPaddr:192.168.83.141 Prefix:24 Hostname:embed-certs-512869 Clientid:01:52:54:00:9f:eb:19}
	I0416 17:28:58.850396   52315 main.go:141] libmachine: (embed-certs-512869) DBG | domain embed-certs-512869 has defined IP address 192.168.83.141 and MAC address 52:54:00:9f:eb:19 in network mk-embed-certs-512869
	I0416 17:28:58.850497   52315 main.go:141] libmachine: (embed-certs-512869) Calling .GetSSHPort
	I0416 17:28:58.850670   52315 main.go:141] libmachine: (embed-certs-512869) Calling .GetSSHKeyPath
	I0416 17:28:58.850819   52315 main.go:141] libmachine: (embed-certs-512869) Calling .GetSSHUsername
	I0416 17:28:58.850973   52315 sshutil.go:53] new ssh client: &{IP:192.168.83.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/embed-certs-512869/id_rsa Username:docker}
	I0416 17:28:58.950982   52315 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 17:28:59.020267   52315 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 17:28:59.089494   52315 main.go:141] libmachine: Stopping "embed-certs-512869"...
	I0416 17:28:59.089523   52315 main.go:141] libmachine: (embed-certs-512869) Calling .GetState
	I0416 17:28:59.091165   52315 main.go:141] libmachine: (embed-certs-512869) Calling .Stop
	I0416 17:28:59.094536   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 0/120
	I0416 17:29:00.095899   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 1/120
	I0416 17:29:01.097088   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 2/120
	I0416 17:29:02.098541   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 3/120
	I0416 17:29:03.100747   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 4/120
	I0416 17:29:04.102905   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 5/120
	I0416 17:29:05.104116   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 6/120
	I0416 17:29:06.105388   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 7/120
	I0416 17:29:07.106688   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 8/120
	I0416 17:29:08.107948   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 9/120
	I0416 17:29:09.109207   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 10/120
	I0416 17:29:10.110465   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 11/120
	I0416 17:29:11.111727   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 12/120
	I0416 17:29:12.113083   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 13/120
	I0416 17:29:13.114429   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 14/120
	I0416 17:29:14.116089   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 15/120
	I0416 17:29:15.117278   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 16/120
	I0416 17:29:16.118520   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 17/120
	I0416 17:29:17.119761   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 18/120
	I0416 17:29:18.121147   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 19/120
	I0416 17:29:19.123191   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 20/120
	I0416 17:29:20.124599   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 21/120
	I0416 17:29:21.125793   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 22/120
	I0416 17:29:22.127181   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 23/120
	I0416 17:29:23.128469   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 24/120
	I0416 17:29:24.130048   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 25/120
	I0416 17:29:25.131388   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 26/120
	I0416 17:29:26.132953   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 27/120
	I0416 17:29:27.134200   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 28/120
	I0416 17:29:28.135407   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 29/120
	I0416 17:29:29.137437   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 30/120
	I0416 17:29:30.138742   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 31/120
	I0416 17:29:31.139955   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 32/120
	I0416 17:29:32.141335   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 33/120
	I0416 17:29:33.142611   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 34/120
	I0416 17:29:34.144206   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 35/120
	I0416 17:29:35.145478   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 36/120
	I0416 17:29:36.146811   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 37/120
	I0416 17:29:37.148121   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 38/120
	I0416 17:29:38.149519   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 39/120
	I0416 17:29:39.151213   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 40/120
	I0416 17:29:40.152629   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 41/120
	I0416 17:29:41.154125   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 42/120
	I0416 17:29:42.155604   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 43/120
	I0416 17:29:43.156982   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 44/120
	I0416 17:29:44.158615   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 45/120
	I0416 17:29:45.160042   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 46/120
	I0416 17:29:46.161316   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 47/120
	I0416 17:29:47.162610   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 48/120
	I0416 17:29:48.163970   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 49/120
	I0416 17:29:49.165792   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 50/120
	I0416 17:29:50.167012   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 51/120
	I0416 17:29:51.168413   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 52/120
	I0416 17:29:52.169789   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 53/120
	I0416 17:29:53.171668   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 54/120
	I0416 17:29:54.173653   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 55/120
	I0416 17:29:55.175004   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 56/120
	I0416 17:29:56.177043   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 57/120
	I0416 17:29:57.178115   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 58/120
	I0416 17:29:58.179459   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 59/120
	I0416 17:29:59.181694   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 60/120
	I0416 17:30:00.183225   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 61/120
	I0416 17:30:01.184520   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 62/120
	I0416 17:30:02.186031   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 63/120
	I0416 17:30:03.187262   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 64/120
	I0416 17:30:04.188983   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 65/120
	I0416 17:30:05.190240   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 66/120
	I0416 17:30:06.191634   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 67/120
	I0416 17:30:07.193072   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 68/120
	I0416 17:30:08.194364   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 69/120
	I0416 17:30:09.196440   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 70/120
	I0416 17:30:10.197816   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 71/120
	I0416 17:30:11.199508   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 72/120
	I0416 17:30:12.201029   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 73/120
	I0416 17:30:13.203569   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 74/120
	I0416 17:30:14.205684   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 75/120
	I0416 17:30:15.207386   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 76/120
	I0416 17:30:16.209229   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 77/120
	I0416 17:30:17.211328   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 78/120
	I0416 17:30:18.212672   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 79/120
	I0416 17:30:19.214665   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 80/120
	I0416 17:30:20.216260   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 81/120
	I0416 17:30:21.217600   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 82/120
	I0416 17:30:22.219429   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 83/120
	I0416 17:30:23.220761   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 84/120
	I0416 17:30:24.222445   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 85/120
	I0416 17:30:25.224142   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 86/120
	I0416 17:30:26.225769   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 87/120
	I0416 17:30:27.227111   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 88/120
	I0416 17:30:28.228339   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 89/120
	I0416 17:30:29.230591   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 90/120
	I0416 17:30:30.232064   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 91/120
	I0416 17:30:31.233552   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 92/120
	I0416 17:30:32.234638   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 93/120
	I0416 17:30:33.235914   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 94/120
	I0416 17:30:34.237888   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 95/120
	I0416 17:30:35.239191   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 96/120
	I0416 17:30:36.240435   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 97/120
	I0416 17:30:37.242136   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 98/120
	I0416 17:30:38.243416   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 99/120
	I0416 17:30:39.245403   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 100/120
	I0416 17:30:40.247268   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 101/120
	I0416 17:30:41.248636   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 102/120
	I0416 17:30:42.249972   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 103/120
	I0416 17:30:43.251375   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 104/120
	I0416 17:30:44.253236   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 105/120
	I0416 17:30:45.255238   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 106/120
	I0416 17:30:46.256443   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 107/120
	I0416 17:30:47.257737   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 108/120
	I0416 17:30:48.259150   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 109/120
	I0416 17:30:49.261378   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 110/120
	I0416 17:30:50.262683   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 111/120
	I0416 17:30:51.264047   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 112/120
	I0416 17:30:52.265755   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 113/120
	I0416 17:30:53.267128   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 114/120
	I0416 17:30:54.268866   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 115/120
	I0416 17:30:55.270078   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 116/120
	I0416 17:30:56.271310   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 117/120
	I0416 17:30:57.272560   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 118/120
	I0416 17:30:58.273724   52315 main.go:141] libmachine: (embed-certs-512869) Waiting for machine to stop 119/120
	I0416 17:30:59.275225   52315 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 17:30:59.275310   52315 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 17:30:59.277015   52315 out.go:177] 
	W0416 17:30:59.278436   52315 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 17:30:59.278459   52315 out.go:239] * 
	* 
	W0416 17:30:59.283065   52315 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:30:59.284461   52315 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-512869 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869: exit status 3 (18.570119238s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:31:17.857255   53124 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.141:22: connect: no route to host
	E0416 17:31:17.857278   53124 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.141:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-512869" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.09s)
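The embed-certs stop fails the same way, and again only after the pre-stop backup succeeds: the log shows sudo mkdir -p /var/lib/minikube/backup followed by rsync --archive --relative for /etc/cni and /etc/kubernetes, where --relative preserves the full source path under the destination (for example /var/lib/minikube/backup/etc/kubernetes). A minimal sketch of that backup step, assuming plain local os/exec rather than minikube's SSH runner:

package backup

import (
	"fmt"
	"os/exec"
)

// backupVMConfig mirrors the backup commands seen in the log: create the
// destination directory, then rsync each source directory into it with
// --relative so the original path is kept under dest. Running the commands
// locally with os/exec is an assumption for illustration; the real flow runs
// them over SSH inside the VM.
func backupVMConfig(dirs []string, dest string) error {
	if out, err := exec.Command("sudo", "mkdir", "-p", dest).CombinedOutput(); err != nil {
		return fmt.Errorf("mkdir -p %s: %v: %s", dest, err, out)
	}
	for _, dir := range dirs {
		cmd := exec.Command("sudo", "rsync", "--archive", "--relative", dir, dest)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("rsync %s: %v: %s", dir, err, out)
		}
	}
	return nil
}

Calling backupVMConfig([]string{"/etc/cni", "/etc/kubernetes"}, "/var/lib/minikube/backup") reproduces the two rsync invocations that appear in both stop logs.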

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (513.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-795352 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-795352 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m32.145697212s)

                                                
                                                
-- stdout --
	* [old-k8s-version-795352] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-795352" primary control-plane node in "old-k8s-version-795352" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-795352" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 17:29:50.036251   52649 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:29:50.036374   52649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:29:50.036385   52649 out.go:304] Setting ErrFile to fd 2...
	I0416 17:29:50.036391   52649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:29:50.036592   52649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:29:50.037172   52649 out.go:298] Setting JSON to false
	I0416 17:29:50.038072   52649 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4342,"bootTime":1713284248,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:29:50.038136   52649 start.go:139] virtualization: kvm guest
	I0416 17:29:50.040402   52649 out.go:177] * [old-k8s-version-795352] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:29:50.041981   52649 notify.go:220] Checking for updates...
	I0416 17:29:50.041990   52649 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:29:50.043323   52649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:29:50.044622   52649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:29:50.045935   52649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:29:50.047219   52649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:29:50.048398   52649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:29:50.049925   52649 config.go:182] Loaded profile config "old-k8s-version-795352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 17:29:50.050307   52649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:29:50.050367   52649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:29:50.064653   52649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34873
	I0416 17:29:50.065058   52649 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:29:50.065655   52649 main.go:141] libmachine: Using API Version  1
	I0416 17:29:50.065682   52649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:29:50.066002   52649 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:29:50.066168   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:29:50.067965   52649 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0416 17:29:50.069209   52649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:29:50.069483   52649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:29:50.069514   52649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:29:50.084883   52649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44417
	I0416 17:29:50.085197   52649 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:29:50.085632   52649 main.go:141] libmachine: Using API Version  1
	I0416 17:29:50.085654   52649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:29:50.085932   52649 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:29:50.086129   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:29:50.121237   52649 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:29:50.122659   52649 start.go:297] selected driver: kvm2
	I0416 17:29:50.122671   52649 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-795352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:29:50.122764   52649 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:29:50.123382   52649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:29:50.123450   52649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:29:50.137481   52649 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:29:50.137803   52649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:29:50.137865   52649 cni.go:84] Creating CNI manager for ""
	I0416 17:29:50.137878   52649 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:29:50.137908   52649 start.go:340] cluster config:
	{Name:old-k8s-version-795352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:29:50.138006   52649 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:29:50.140053   52649 out.go:177] * Starting "old-k8s-version-795352" primary control-plane node in "old-k8s-version-795352" cluster
	I0416 17:29:50.141395   52649 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 17:29:50.141429   52649 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 17:29:50.141439   52649 cache.go:56] Caching tarball of preloaded images
	I0416 17:29:50.141512   52649 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:29:50.141534   52649 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0416 17:29:50.141623   52649 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/config.json ...
	I0416 17:29:50.141789   52649 start.go:360] acquireMachinesLock for old-k8s-version-795352: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:29:50.141831   52649 start.go:364] duration metric: took 23.64µs to acquireMachinesLock for "old-k8s-version-795352"
	I0416 17:29:50.141848   52649 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:29:50.141863   52649 fix.go:54] fixHost starting: 
	I0416 17:29:50.142093   52649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:29:50.142127   52649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:29:50.156703   52649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36515
	I0416 17:29:50.157176   52649 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:29:50.157614   52649 main.go:141] libmachine: Using API Version  1
	I0416 17:29:50.157634   52649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:29:50.157947   52649 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:29:50.158133   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:29:50.158283   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetState
	I0416 17:29:50.159806   52649 fix.go:112] recreateIfNeeded on old-k8s-version-795352: state=Stopped err=<nil>
	I0416 17:29:50.159843   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	W0416 17:29:50.159993   52649 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:29:50.161789   52649 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-795352" ...
	I0416 17:29:50.163070   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .Start
	I0416 17:29:50.163229   52649 main.go:141] libmachine: (old-k8s-version-795352) Ensuring networks are active...
	I0416 17:29:50.163965   52649 main.go:141] libmachine: (old-k8s-version-795352) Ensuring network default is active
	I0416 17:29:50.164268   52649 main.go:141] libmachine: (old-k8s-version-795352) Ensuring network mk-old-k8s-version-795352 is active
	I0416 17:29:50.164621   52649 main.go:141] libmachine: (old-k8s-version-795352) Getting domain xml...
	I0416 17:29:50.165313   52649 main.go:141] libmachine: (old-k8s-version-795352) Creating domain...
	I0416 17:29:51.344041   52649 main.go:141] libmachine: (old-k8s-version-795352) Waiting to get IP...
	I0416 17:29:51.345041   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:51.345470   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:51.345543   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:51.345462   52685 retry.go:31] will retry after 226.021981ms: waiting for machine to come up
	I0416 17:29:51.573124   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:51.573618   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:51.573644   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:51.573573   52685 retry.go:31] will retry after 297.554741ms: waiting for machine to come up
	I0416 17:29:51.873123   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:51.873616   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:51.873643   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:51.873568   52685 retry.go:31] will retry after 433.658457ms: waiting for machine to come up
	I0416 17:29:52.309128   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:52.309659   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:52.309691   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:52.309609   52685 retry.go:31] will retry after 565.332584ms: waiting for machine to come up
	I0416 17:29:52.876089   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:52.876553   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:52.876577   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:52.876500   52685 retry.go:31] will retry after 535.349376ms: waiting for machine to come up
	I0416 17:29:53.412975   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:53.413413   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:53.413440   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:53.413373   52685 retry.go:31] will retry after 874.295591ms: waiting for machine to come up
	I0416 17:29:54.289485   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:54.289965   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:54.289989   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:54.289912   52685 retry.go:31] will retry after 971.23522ms: waiting for machine to come up
	I0416 17:29:55.262846   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:55.263318   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:55.263344   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:55.263287   52685 retry.go:31] will retry after 958.566806ms: waiting for machine to come up
	I0416 17:29:56.223242   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:56.223630   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:56.223660   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:56.223585   52685 retry.go:31] will retry after 1.310719355s: waiting for machine to come up
	I0416 17:29:57.535501   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:57.536021   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:57.536051   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:57.535971   52685 retry.go:31] will retry after 2.282186434s: waiting for machine to come up
	I0416 17:29:59.819706   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:29:59.820249   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:29:59.820282   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:29:59.820172   52685 retry.go:31] will retry after 1.826464613s: waiting for machine to come up
	I0416 17:30:01.649211   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:01.649767   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:30:01.649812   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:30:01.649696   52685 retry.go:31] will retry after 2.339272856s: waiting for machine to come up
	I0416 17:30:03.992106   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:03.992552   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | unable to find current IP address of domain old-k8s-version-795352 in network mk-old-k8s-version-795352
	I0416 17:30:03.992585   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | I0416 17:30:03.992520   52685 retry.go:31] will retry after 4.245389555s: waiting for machine to come up
	I0416 17:30:08.241880   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.242371   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has current primary IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.242391   52649 main.go:141] libmachine: (old-k8s-version-795352) Found IP for machine: 192.168.50.168
	I0416 17:30:08.242404   52649 main.go:141] libmachine: (old-k8s-version-795352) Reserving static IP address...
	I0416 17:30:08.242823   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "old-k8s-version-795352", mac: "52:54:00:9a:48:0e", ip: "192.168.50.168"} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.242852   52649 main.go:141] libmachine: (old-k8s-version-795352) Reserved static IP address: 192.168.50.168
	I0416 17:30:08.242867   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | skip adding static IP to network mk-old-k8s-version-795352 - found existing host DHCP lease matching {name: "old-k8s-version-795352", mac: "52:54:00:9a:48:0e", ip: "192.168.50.168"}
	I0416 17:30:08.242879   52649 main.go:141] libmachine: (old-k8s-version-795352) Waiting for SSH to be available...
	I0416 17:30:08.242895   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | Getting to WaitForSSH function...
	I0416 17:30:08.245100   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.245389   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.245418   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.245494   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | Using SSH client type: external
	I0416 17:30:08.245529   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa (-rw-------)
	I0416 17:30:08.245574   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 17:30:08.245598   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | About to run SSH command:
	I0416 17:30:08.245610   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | exit 0
	I0416 17:30:08.368621   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | SSH cmd err, output: <nil>: 
	I0416 17:30:08.368977   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetConfigRaw
	I0416 17:30:08.369618   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetIP
	I0416 17:30:08.371792   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.372123   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.372148   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.372354   52649 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/config.json ...
	I0416 17:30:08.372518   52649 machine.go:94] provisionDockerMachine start ...
	I0416 17:30:08.372533   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:30:08.372737   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:08.374747   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.375063   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.375089   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.375245   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:08.375414   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.375567   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.375715   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:08.375870   52649 main.go:141] libmachine: Using SSH client type: native
	I0416 17:30:08.376063   52649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:30:08.376077   52649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:30:08.481881   52649 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:30:08.481910   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetMachineName
	I0416 17:30:08.482193   52649 buildroot.go:166] provisioning hostname "old-k8s-version-795352"
	I0416 17:30:08.482220   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetMachineName
	I0416 17:30:08.482402   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:08.484628   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.485049   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.485075   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.485270   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:08.485490   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.485651   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.485806   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:08.485947   52649 main.go:141] libmachine: Using SSH client type: native
	I0416 17:30:08.486133   52649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:30:08.486150   52649 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-795352 && echo "old-k8s-version-795352" | sudo tee /etc/hostname
	I0416 17:30:08.604441   52649 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-795352
	
	I0416 17:30:08.604468   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:08.607148   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.607467   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.607498   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.607639   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:08.607825   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.607987   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.608138   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:08.608281   52649 main.go:141] libmachine: Using SSH client type: native
	I0416 17:30:08.608454   52649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:30:08.608473   52649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-795352' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-795352/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-795352' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:30:08.723322   52649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:30:08.723350   52649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:30:08.723368   52649 buildroot.go:174] setting up certificates
	I0416 17:30:08.723377   52649 provision.go:84] configureAuth start
	I0416 17:30:08.723385   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetMachineName
	I0416 17:30:08.723673   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetIP
	I0416 17:30:08.726599   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.727035   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.727073   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.727196   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:08.729366   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.729740   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.729800   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.729918   52649 provision.go:143] copyHostCerts
	I0416 17:30:08.729963   52649 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:30:08.729979   52649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:30:08.730045   52649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:30:08.730137   52649 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:30:08.730145   52649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:30:08.730168   52649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:30:08.730224   52649 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:30:08.730231   52649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:30:08.730251   52649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:30:08.730303   52649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-795352 san=[127.0.0.1 192.168.50.168 localhost minikube old-k8s-version-795352]
	I0416 17:30:08.797270   52649 provision.go:177] copyRemoteCerts
	I0416 17:30:08.797318   52649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:30:08.797341   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:08.799722   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.800085   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.800122   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.800340   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:08.800521   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.800678   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:08.800786   52649 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa Username:docker}
	I0416 17:30:08.883994   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:30:08.908472   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 17:30:08.935131   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:30:08.962061   52649 provision.go:87] duration metric: took 238.672611ms to configureAuth
	I0416 17:30:08.962086   52649 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:30:08.962280   52649 config.go:182] Loaded profile config "old-k8s-version-795352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 17:30:08.962357   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:08.965037   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.965396   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:08.965427   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:08.965559   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:08.965752   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.965902   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:08.966029   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:08.966187   52649 main.go:141] libmachine: Using SSH client type: native
	I0416 17:30:08.966344   52649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:30:08.966359   52649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:30:09.256014   52649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:30:09.256051   52649 machine.go:97] duration metric: took 883.513712ms to provisionDockerMachine
	I0416 17:30:09.256066   52649 start.go:293] postStartSetup for "old-k8s-version-795352" (driver="kvm2")
	I0416 17:30:09.256078   52649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:30:09.256112   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:30:09.256438   52649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:30:09.256463   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:09.258757   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.259144   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:09.259174   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.259323   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:09.259509   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:09.259674   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:09.259832   52649 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa Username:docker}
	I0416 17:30:09.344980   52649 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:30:09.349589   52649 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:30:09.349610   52649 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:30:09.349662   52649 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:30:09.349730   52649 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:30:09.349818   52649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:30:09.360612   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:30:09.386630   52649 start.go:296] duration metric: took 130.552231ms for postStartSetup
	I0416 17:30:09.386664   52649 fix.go:56] duration metric: took 19.244805696s for fixHost
	I0416 17:30:09.386691   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:09.389498   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.389929   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:09.389962   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.390131   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:09.390348   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:09.390532   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:09.390704   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:09.390886   52649 main.go:141] libmachine: Using SSH client type: native
	I0416 17:30:09.391086   52649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0416 17:30:09.391099   52649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 17:30:09.502008   52649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713288609.454292671
	
	I0416 17:30:09.502029   52649 fix.go:216] guest clock: 1713288609.454292671
	I0416 17:30:09.502039   52649 fix.go:229] Guest: 2024-04-16 17:30:09.454292671 +0000 UTC Remote: 2024-04-16 17:30:09.38666844 +0000 UTC m=+19.395935207 (delta=67.624231ms)
	I0416 17:30:09.502082   52649 fix.go:200] guest clock delta is within tolerance: 67.624231ms
	I0416 17:30:09.502089   52649 start.go:83] releasing machines lock for "old-k8s-version-795352", held for 19.360245851s
	I0416 17:30:09.502117   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:30:09.502375   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetIP
	I0416 17:30:09.504956   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.505300   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:09.505334   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.505403   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:30:09.505884   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:30:09.506041   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .DriverName
	I0416 17:30:09.506108   52649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:30:09.506145   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:09.506391   52649 ssh_runner.go:195] Run: cat /version.json
	I0416 17:30:09.506411   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHHostname
	I0416 17:30:09.508876   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.509170   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:09.509208   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.509294   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.509354   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:09.509534   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:09.509669   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:09.509718   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:09.509740   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:09.509822   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHPort
	I0416 17:30:09.509894   52649 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa Username:docker}
	I0416 17:30:09.509992   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHKeyPath
	I0416 17:30:09.510106   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetSSHUsername
	I0416 17:30:09.510266   52649 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/old-k8s-version-795352/id_rsa Username:docker}
	I0416 17:30:09.586347   52649 ssh_runner.go:195] Run: systemctl --version
	I0416 17:30:09.610798   52649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:30:09.767069   52649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:30:09.773896   52649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:30:09.773945   52649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:30:09.791019   52649 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:30:09.791041   52649 start.go:494] detecting cgroup driver to use...
	I0416 17:30:09.791090   52649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:30:09.807342   52649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:30:09.821339   52649 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:30:09.821386   52649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:30:09.836149   52649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:30:09.850191   52649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:30:09.972199   52649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:30:10.135105   52649 docker.go:233] disabling docker service ...
	I0416 17:30:10.135171   52649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:30:10.153156   52649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:30:10.168731   52649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:30:10.325677   52649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:30:10.464413   52649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:30:10.480768   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:30:10.501227   52649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 17:30:10.501300   52649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:30:10.512154   52649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:30:10.512204   52649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:30:10.523048   52649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:30:10.534922   52649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:30:10.546797   52649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:30:10.558986   52649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:30:10.569940   52649 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 17:30:10.569988   52649 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 17:30:10.586400   52649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:30:10.596341   52649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:30:10.728629   52649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:30:10.888158   52649 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:30:10.888220   52649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:30:10.893639   52649 start.go:562] Will wait 60s for crictl version
	I0416 17:30:10.893698   52649 ssh_runner.go:195] Run: which crictl
	I0416 17:30:10.897760   52649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:30:10.938445   52649 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:30:10.938524   52649 ssh_runner.go:195] Run: crio --version
	I0416 17:30:10.969918   52649 ssh_runner.go:195] Run: crio --version
	I0416 17:30:11.005430   52649 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 17:30:11.006812   52649 main.go:141] libmachine: (old-k8s-version-795352) Calling .GetIP
	I0416 17:30:11.009484   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:11.009850   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:48:0e", ip: ""} in network mk-old-k8s-version-795352: {Iface:virbr2 ExpiryTime:2024-04-16 18:30:02 +0000 UTC Type:0 Mac:52:54:00:9a:48:0e Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:old-k8s-version-795352 Clientid:01:52:54:00:9a:48:0e}
	I0416 17:30:11.009870   52649 main.go:141] libmachine: (old-k8s-version-795352) DBG | domain old-k8s-version-795352 has defined IP address 192.168.50.168 and MAC address 52:54:00:9a:48:0e in network mk-old-k8s-version-795352
	I0416 17:30:11.010125   52649 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 17:30:11.014663   52649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:30:11.028467   52649 kubeadm.go:877] updating cluster {Name:old-k8s-version-795352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:30:11.028578   52649 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 17:30:11.028624   52649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:30:11.074209   52649 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 17:30:11.074266   52649 ssh_runner.go:195] Run: which lz4
	I0416 17:30:11.078662   52649 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 17:30:11.083356   52649 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:30:11.083396   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 17:30:13.017521   52649 crio.go:462] duration metric: took 1.938879049s to copy over tarball
	I0416 17:30:13.017598   52649 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:30:16.315009   52649 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.297381406s)
	I0416 17:30:16.315040   52649 crio.go:469] duration metric: took 3.297490182s to extract the tarball
	I0416 17:30:16.315049   52649 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 17:30:16.363274   52649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:30:16.410175   52649 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 17:30:16.410204   52649 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 17:30:16.410272   52649 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:30:16.410304   52649 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:30:16.410335   52649 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 17:30:16.410516   52649 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 17:30:16.410529   52649 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:30:16.410314   52649 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:30:16.410604   52649 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:30:16.410622   52649 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:30:16.411893   52649 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:30:16.411898   52649 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:30:16.411895   52649 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:30:16.411914   52649 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 17:30:16.411893   52649 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:30:16.411901   52649 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 17:30:16.411948   52649 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:30:16.411895   52649 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:30:16.561209   52649 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:30:16.573835   52649 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:30:16.583279   52649 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 17:30:16.588190   52649 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 17:30:16.628344   52649 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:30:16.647287   52649 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:30:16.682301   52649 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 17:30:16.683843   52649 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 17:30:16.683882   52649 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:30:16.683936   52649 ssh_runner.go:195] Run: which crictl
	I0416 17:30:16.683954   52649 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 17:30:16.684007   52649 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:30:16.684048   52649 ssh_runner.go:195] Run: which crictl
	I0416 17:30:16.697211   52649 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:30:16.781965   52649 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 17:30:16.782011   52649 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 17:30:16.782052   52649 ssh_runner.go:195] Run: which crictl
	I0416 17:30:16.812131   52649 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 17:30:16.812178   52649 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 17:30:16.812226   52649 ssh_runner.go:195] Run: which crictl
	I0416 17:30:16.829079   52649 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 17:30:16.829170   52649 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 17:30:16.829209   52649 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 17:30:16.829213   52649 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:30:16.829220   52649 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 17:30:16.829181   52649 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:30:16.829252   52649 ssh_runner.go:195] Run: which crictl
	I0416 17:30:16.829271   52649 ssh_runner.go:195] Run: which crictl
	I0416 17:30:16.829129   52649 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 17:30:16.829365   52649 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 17:30:16.829392   52649 ssh_runner.go:195] Run: which crictl
	I0416 17:30:16.962111   52649 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 17:30:16.962201   52649 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 17:30:16.962283   52649 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 17:30:16.962298   52649 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 17:30:16.962330   52649 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 17:30:16.962367   52649 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 17:30:16.962429   52649 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 17:30:17.089649   52649 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 17:30:17.090011   52649 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 17:30:17.090056   52649 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 17:30:17.090150   52649 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 17:30:17.090168   52649 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 17:30:17.090199   52649 cache_images.go:92] duration metric: took 679.981126ms to LoadCachedImages
	W0416 17:30:17.090305   52649 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0416 17:30:17.090324   52649 kubeadm.go:928] updating node { 192.168.50.168 8443 v1.20.0 crio true true} ...
	I0416 17:30:17.090444   52649 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-795352 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:30:17.090519   52649 ssh_runner.go:195] Run: crio config
	I0416 17:30:17.140779   52649 cni.go:84] Creating CNI manager for ""
	I0416 17:30:17.140803   52649 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:30:17.140816   52649 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:30:17.140857   52649 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.168 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-795352 NodeName:old-k8s-version-795352 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 17:30:17.141002   52649 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-795352"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:30:17.141062   52649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 17:30:17.152306   52649 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:30:17.152376   52649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:30:17.162554   52649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0416 17:30:17.180879   52649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:30:17.198890   52649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
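	(The kubeadm/kubelet/kube-proxy configuration dumped above is rendered from the cluster settings and then copied to the node as kubeadm.yaml.new, as the scp line shows. A rough Go sketch of rendering such a document with text/template follows; the template and struct are simplified stand-ins, not minikube's real bootstrapper templates.)

package main

import (
	"os"
	"text/template"
)

// A trimmed-down template for the ClusterConfiguration fragment above;
// the real generator covers many more fields, this only shows the mechanism.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.Port}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type cfg struct {
	ControlPlaneEndpoint string
	Port                 int
	KubernetesVersion    string
	PodSubnet            string
	ServiceSubnet        string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	// Values taken from the log output above.
	err := t.Execute(os.Stdout, cfg{
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		Port:                 8443,
		KubernetesVersion:    "v1.20.0",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}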
	I0416 17:30:17.219212   52649 ssh_runner.go:195] Run: grep 192.168.50.168	control-plane.minikube.internal$ /etc/hosts
	I0416 17:30:17.223556   52649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
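	(The bash one-liner above makes the control-plane hostname mapping idempotent: it drops any existing /etc/hosts line for control-plane.minikube.internal and appends the current node IP. A pure-Go equivalent of that rewrite, for illustration only; it prints the result instead of overwriting /etc/hosts.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry returns the hosts file content with exactly one
// "<ip>\t<host>" line, mirroring the grep -v / echo / cp one-liner above.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping for this host
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(data), "192.168.50.168", "control-plane.minikube.internal"))
}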
	I0416 17:30:17.236944   52649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:30:17.365049   52649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:30:17.384717   52649 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352 for IP: 192.168.50.168
	I0416 17:30:17.384744   52649 certs.go:194] generating shared ca certs ...
	I0416 17:30:17.384764   52649 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:30:17.384993   52649 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:30:17.385050   52649 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:30:17.385064   52649 certs.go:256] generating profile certs ...
	I0416 17:30:17.385188   52649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.key
	I0416 17:30:17.385252   52649 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.key.8f51567a
	I0416 17:30:17.385306   52649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.key
	I0416 17:30:17.385461   52649 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:30:17.385503   52649 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:30:17.385516   52649 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:30:17.385546   52649 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:30:17.385570   52649 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:30:17.385592   52649 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:30:17.385629   52649 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:30:17.386866   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:30:17.414626   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:30:17.442968   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:30:17.471667   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:30:17.504075   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 17:30:17.542249   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 17:30:17.574100   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:30:17.612874   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:30:17.642182   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:30:17.676408   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:30:17.705455   52649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:30:17.732050   52649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:30:17.750399   52649 ssh_runner.go:195] Run: openssl version
	I0416 17:30:17.756894   52649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:30:17.768974   52649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:30:17.773893   52649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:30:17.773945   52649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:30:17.780262   52649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:30:17.792203   52649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:30:17.804013   52649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:30:17.809419   52649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:30:17.809461   52649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:30:17.815701   52649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:30:17.827305   52649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:30:17.838804   52649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:30:17.843981   52649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:30:17.844017   52649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:30:17.850156   52649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
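	(The ln -fs steps above install each CA certificate under its OpenSSL subject hash, e.g. b5213941.0, so TLS clients can locate it in /etc/ssl/certs. A small Go sketch that derives the same <hash>.0 link name by shelling out to the openssl command already shown in the log; the helper name is made up for illustration.)

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHashLink computes the OpenSSL subject hash of a CA certificate and
// returns the /etc/ssl/certs/<hash>.0 symlink that would make the cert
// discoverable by hash, as in the ln -fs commands above.
func subjectHashLink(certPath string) (link, target string, err error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", "", err
	}
	hash := strings.TrimSpace(string(out))
	return filepath.Join("/etc/ssl/certs", hash+".0"), certPath, nil
}

func main() {
	link, target, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	// The log creates this with: sudo ln -fs <target> <link>
	fmt.Printf("ln -fs %s %s\n", target, link)
}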
	I0416 17:30:17.861955   52649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:30:17.866864   52649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:30:17.873511   52649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:30:17.879833   52649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:30:17.886966   52649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:30:17.893320   52649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:30:17.899731   52649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
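	(The -checkend 86400 runs above verify that none of the control-plane certificates expire within the next 24 hours. An equivalent check written against Go's crypto/x509 is sketched below; the certificate path is just one of those from the log and the helper is illustrative, not minikube's code.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// roughly what `openssl x509 -checkend 86400` verifies for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}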
	I0416 17:30:17.906163   52649 kubeadm.go:391] StartCluster: {Name:old-k8s-version-795352 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-795352 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:30:17.906242   52649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:30:17.906318   52649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:30:17.951689   52649 cri.go:89] found id: ""
	I0416 17:30:17.951763   52649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 17:30:17.963003   52649 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 17:30:17.963020   52649 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 17:30:17.963026   52649 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 17:30:17.963066   52649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 17:30:17.973748   52649 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:30:17.974634   52649 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-795352" does not appear in /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:30:17.975275   52649 kubeconfig.go:62] /home/jenkins/minikube-integration/18649-3628/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-795352" cluster setting kubeconfig missing "old-k8s-version-795352" context setting]
	I0416 17:30:17.976267   52649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:30:17.978323   52649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 17:30:17.988518   52649 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.168
	I0416 17:30:17.988546   52649 kubeadm.go:1154] stopping kube-system containers ...
	I0416 17:30:17.988558   52649 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 17:30:17.988608   52649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:30:18.029591   52649 cri.go:89] found id: ""
	I0416 17:30:18.029663   52649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 17:30:18.046584   52649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:30:18.056483   52649 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:30:18.056502   52649 kubeadm.go:156] found existing configuration files:
	
	I0416 17:30:18.056538   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:30:18.065840   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:30:18.065884   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:30:18.075438   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:30:18.084866   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:30:18.084916   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:30:18.094460   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:30:18.103661   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:30:18.103699   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:30:18.113337   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:30:18.122993   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:30:18.123048   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
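	(The grep/rm pairs above drop any leftover kubeconfig-style file that does not point at the expected control-plane endpoint before kubeadm regenerates them. A Go sketch of that cleanup pass over the same four files; illustrative only, since the real code shells out to grep and rm exactly as shown.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any of the given config files that do not
// reference the expected control-plane endpoint, mirroring the
// grep / rm -f sequence in the log above.
func cleanStaleConfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if os.IsNotExist(err) {
			continue // nothing to clean (the common case in this run)
		}
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", p)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := cleanStaleConfigs("https://control-plane.minikube.internal:8443", files); err != nil {
		panic(err)
	}
}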
	I0416 17:30:18.132927   52649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:30:18.142643   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:30:18.268414   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:30:19.088855   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:30:19.309610   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:30:19.417534   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
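	(The cluster restart replays the individual kubeadm init phases shown above: certs, kubeconfig, kubelet-start, control-plane, etcd. A compact Go sketch of that sequence using os/exec, assuming the binary and config paths from the log; error handling is simplified relative to the real bootstrapper.)

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases runs the same `kubeadm init phase ...` subcommands the log
// shows, stopping at the first failure.
func runInitPhases(kubeadm, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, append(args, "--config", config)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := runInitPhases("/var/lib/minikube/binaries/v1.20.0/kubeadm", "/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
	}
}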
	I0416 17:30:19.523844   52649 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:30:19.523920   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:20.025060   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:20.524126   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:21.024632   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:21.524597   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:22.024055   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:22.524040   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:23.024755   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:23.524182   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:24.024168   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:24.524992   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:25.024772   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:25.524987   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:26.024904   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:26.524683   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:27.024475   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:27.524804   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:28.024911   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:28.524374   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:29.024105   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:29.524492   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:30.024236   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:30.524008   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:31.024968   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:31.524123   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:32.024304   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:32.524295   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:33.024462   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:33.524970   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:34.024164   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:34.524276   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:35.024390   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:35.524023   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:36.024476   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:36.524825   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:37.024009   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:37.524031   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:38.024232   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:38.524449   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:39.024980   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:39.524907   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:40.024287   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:40.524936   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:41.024285   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:41.524889   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:42.024639   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:42.524775   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:43.024326   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:43.525077   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:44.024781   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:44.524887   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:45.024082   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:45.524577   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:46.025012   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:46.524033   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:47.024953   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:47.524158   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:48.024716   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:48.524385   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:49.024936   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:49.524917   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:50.024408   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:50.524802   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:51.024954   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:51.524247   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:52.023998   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:52.525011   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:53.024631   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:53.524115   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:54.024919   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:54.524267   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:55.024615   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:55.524012   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:56.024368   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:56.524603   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:57.024744   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:57.524194   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:58.024977   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:58.523973   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:59.024579   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:30:59.524263   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:00.024930   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:00.524007   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:01.024299   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:01.523980   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:02.024069   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:02.524910   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:03.024919   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:03.524185   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:04.023991   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:04.525031   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:05.024078   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:05.524960   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:06.024179   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:06.524672   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:07.024121   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:07.524225   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:08.024707   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:08.524076   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:09.024462   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:09.524029   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:10.024406   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:10.524213   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:11.024261   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:11.524153   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:12.024170   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:12.524589   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:13.024949   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:13.524447   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:14.024741   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:14.524919   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:15.024129   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:15.525038   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:16.024916   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:16.524972   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:17.024953   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:17.524958   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:18.024788   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:18.524962   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:19.024191   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
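	(The block above is a plain poll: the same pgrep is retried roughly every 500ms, as the timestamps show, until a kube-apiserver process appears or the wait gives up; in this run it never appears. A stripped-down Go version of such a wait loop, not the actual api_server.go implementation.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` every 500ms
// until the process exists or the timeout elapses.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process is found.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}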
	I0416 17:31:19.524465   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:19.524531   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:19.570822   52649 cri.go:89] found id: ""
	I0416 17:31:19.570849   52649 logs.go:276] 0 containers: []
	W0416 17:31:19.570863   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:19.570871   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:19.570935   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:19.612986   52649 cri.go:89] found id: ""
	I0416 17:31:19.613017   52649 logs.go:276] 0 containers: []
	W0416 17:31:19.613028   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:19.613037   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:19.613115   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:19.655432   52649 cri.go:89] found id: ""
	I0416 17:31:19.655455   52649 logs.go:276] 0 containers: []
	W0416 17:31:19.655461   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:19.655466   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:19.655511   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:19.694542   52649 cri.go:89] found id: ""
	I0416 17:31:19.694575   52649 logs.go:276] 0 containers: []
	W0416 17:31:19.694594   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:19.694602   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:19.694677   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:19.732584   52649 cri.go:89] found id: ""
	I0416 17:31:19.732617   52649 logs.go:276] 0 containers: []
	W0416 17:31:19.732628   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:19.732634   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:19.732707   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:19.774596   52649 cri.go:89] found id: ""
	I0416 17:31:19.774620   52649 logs.go:276] 0 containers: []
	W0416 17:31:19.774629   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:19.774635   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:19.774707   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:19.823013   52649 cri.go:89] found id: ""
	I0416 17:31:19.823043   52649 logs.go:276] 0 containers: []
	W0416 17:31:19.823054   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:19.823061   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:19.823130   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:19.868258   52649 cri.go:89] found id: ""
	I0416 17:31:19.868284   52649 logs.go:276] 0 containers: []
	W0416 17:31:19.868294   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:19.868303   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:19.868316   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:19.927434   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:19.927466   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:19.942341   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:19.942368   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:20.080709   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:20.080736   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:20.080758   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:20.146359   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:20.146391   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
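	(Because the apiserver never comes up, the tooling falls back to collecting diagnostics from several sources: kubelet, dmesg, describe nodes, CRI-O, and container status. A simplified Go sketch that shells out to the same commands and collects their output; the command strings are copied from the log, the describe-nodes call is omitted here since the apiserver is down, and errors are deliberately ignored so a failing source does not block the others.)

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs the diagnostic commands the log cycles through and returns
// their combined output keyed by source name.
func gatherLogs() map[string]string {
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "journalctl -u crio -n 400",
		"container status": "crictl ps -a || docker ps -a",
	}
	out := make(map[string]string, len(sources))
	for name, cmdline := range sources {
		b, _ := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range gatherLogs() {
		fmt.Printf("==> %s (%d bytes)\n", name, len(logs))
	}
}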
	I0416 17:31:22.697248   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:22.713864   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:22.713936   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:22.760941   52649 cri.go:89] found id: ""
	I0416 17:31:22.760964   52649 logs.go:276] 0 containers: []
	W0416 17:31:22.760972   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:22.760979   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:22.761034   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:22.804293   52649 cri.go:89] found id: ""
	I0416 17:31:22.804313   52649 logs.go:276] 0 containers: []
	W0416 17:31:22.804339   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:22.804347   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:22.804390   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:22.848828   52649 cri.go:89] found id: ""
	I0416 17:31:22.848869   52649 logs.go:276] 0 containers: []
	W0416 17:31:22.848876   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:22.848882   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:22.848946   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:22.896640   52649 cri.go:89] found id: ""
	I0416 17:31:22.896664   52649 logs.go:276] 0 containers: []
	W0416 17:31:22.896674   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:22.896680   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:22.896735   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:22.933550   52649 cri.go:89] found id: ""
	I0416 17:31:22.933573   52649 logs.go:276] 0 containers: []
	W0416 17:31:22.933583   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:22.933604   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:22.933656   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:22.971388   52649 cri.go:89] found id: ""
	I0416 17:31:22.971411   52649 logs.go:276] 0 containers: []
	W0416 17:31:22.971420   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:22.971427   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:22.971482   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:23.009365   52649 cri.go:89] found id: ""
	I0416 17:31:23.009386   52649 logs.go:276] 0 containers: []
	W0416 17:31:23.009394   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:23.009398   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:23.009440   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:23.046371   52649 cri.go:89] found id: ""
	I0416 17:31:23.046402   52649 logs.go:276] 0 containers: []
	W0416 17:31:23.046413   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:23.046423   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:23.046436   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:23.118074   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:23.118105   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:23.163996   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:23.164025   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:23.217913   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:23.217944   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:23.232279   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:23.232302   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:23.307925   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:25.808168   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:25.822481   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:25.822555   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:25.861668   52649 cri.go:89] found id: ""
	I0416 17:31:25.861691   52649 logs.go:276] 0 containers: []
	W0416 17:31:25.861699   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:25.861704   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:25.861748   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:25.899929   52649 cri.go:89] found id: ""
	I0416 17:31:25.899961   52649 logs.go:276] 0 containers: []
	W0416 17:31:25.899972   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:25.899979   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:25.900029   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:25.938571   52649 cri.go:89] found id: ""
	I0416 17:31:25.938595   52649 logs.go:276] 0 containers: []
	W0416 17:31:25.938604   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:25.938610   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:25.938664   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:25.975149   52649 cri.go:89] found id: ""
	I0416 17:31:25.975174   52649 logs.go:276] 0 containers: []
	W0416 17:31:25.975183   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:25.975190   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:25.975254   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:26.011706   52649 cri.go:89] found id: ""
	I0416 17:31:26.011733   52649 logs.go:276] 0 containers: []
	W0416 17:31:26.011743   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:26.011749   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:26.011803   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:26.048499   52649 cri.go:89] found id: ""
	I0416 17:31:26.048524   52649 logs.go:276] 0 containers: []
	W0416 17:31:26.048533   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:26.048539   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:26.048584   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:26.086233   52649 cri.go:89] found id: ""
	I0416 17:31:26.086258   52649 logs.go:276] 0 containers: []
	W0416 17:31:26.086267   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:26.086273   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:26.086315   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:26.125087   52649 cri.go:89] found id: ""
	I0416 17:31:26.125116   52649 logs.go:276] 0 containers: []
	W0416 17:31:26.125126   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:26.125137   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:26.125150   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:26.180022   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:26.180051   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:26.194858   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:26.194881   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:26.266593   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:26.266624   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:26.266635   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:26.338521   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:26.338556   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:28.881886   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:28.896943   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:28.897010   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:28.932810   52649 cri.go:89] found id: ""
	I0416 17:31:28.932851   52649 logs.go:276] 0 containers: []
	W0416 17:31:28.932858   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:28.932866   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:28.932915   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:28.976605   52649 cri.go:89] found id: ""
	I0416 17:31:28.976629   52649 logs.go:276] 0 containers: []
	W0416 17:31:28.976639   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:28.976645   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:28.976703   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:29.015062   52649 cri.go:89] found id: ""
	I0416 17:31:29.015090   52649 logs.go:276] 0 containers: []
	W0416 17:31:29.015098   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:29.015103   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:29.015157   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:29.056903   52649 cri.go:89] found id: ""
	I0416 17:31:29.056933   52649 logs.go:276] 0 containers: []
	W0416 17:31:29.056943   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:29.056953   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:29.057006   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:29.115991   52649 cri.go:89] found id: ""
	I0416 17:31:29.116022   52649 logs.go:276] 0 containers: []
	W0416 17:31:29.116030   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:29.116036   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:29.116086   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:29.156116   52649 cri.go:89] found id: ""
	I0416 17:31:29.156144   52649 logs.go:276] 0 containers: []
	W0416 17:31:29.156151   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:29.156157   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:29.156205   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:29.192996   52649 cri.go:89] found id: ""
	I0416 17:31:29.193025   52649 logs.go:276] 0 containers: []
	W0416 17:31:29.193035   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:29.193041   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:29.193085   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:29.235516   52649 cri.go:89] found id: ""
	I0416 17:31:29.235546   52649 logs.go:276] 0 containers: []
	W0416 17:31:29.235554   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:29.235562   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:29.235573   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:29.316522   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:29.316559   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:29.358143   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:29.358167   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:29.410841   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:29.410870   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:29.425411   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:29.425437   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:29.504288   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:32.004720   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:32.019602   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:32.019666   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:32.062445   52649 cri.go:89] found id: ""
	I0416 17:31:32.062469   52649 logs.go:276] 0 containers: []
	W0416 17:31:32.062476   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:32.062481   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:32.062543   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:32.101859   52649 cri.go:89] found id: ""
	I0416 17:31:32.101879   52649 logs.go:276] 0 containers: []
	W0416 17:31:32.101885   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:32.101890   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:32.101938   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:32.140528   52649 cri.go:89] found id: ""
	I0416 17:31:32.140552   52649 logs.go:276] 0 containers: []
	W0416 17:31:32.140563   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:32.140570   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:32.140624   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:32.184406   52649 cri.go:89] found id: ""
	I0416 17:31:32.184431   52649 logs.go:276] 0 containers: []
	W0416 17:31:32.184441   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:32.184448   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:32.184497   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:32.228993   52649 cri.go:89] found id: ""
	I0416 17:31:32.229018   52649 logs.go:276] 0 containers: []
	W0416 17:31:32.229028   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:32.229035   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:32.229092   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:32.266340   52649 cri.go:89] found id: ""
	I0416 17:31:32.266372   52649 logs.go:276] 0 containers: []
	W0416 17:31:32.266384   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:32.266390   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:32.266449   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:32.308132   52649 cri.go:89] found id: ""
	I0416 17:31:32.308163   52649 logs.go:276] 0 containers: []
	W0416 17:31:32.308171   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:32.308176   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:32.308229   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:32.346347   52649 cri.go:89] found id: ""
	I0416 17:31:32.346374   52649 logs.go:276] 0 containers: []
	W0416 17:31:32.346385   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:32.346396   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:32.346410   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:32.405636   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:32.405668   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:32.420159   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:32.420182   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:32.502094   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:32.502114   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:32.502129   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:32.589039   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:32.589072   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:35.135359   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:35.154107   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:35.154184   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:35.206077   52649 cri.go:89] found id: ""
	I0416 17:31:35.206097   52649 logs.go:276] 0 containers: []
	W0416 17:31:35.206107   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:35.206115   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:35.206175   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:35.253416   52649 cri.go:89] found id: ""
	I0416 17:31:35.253442   52649 logs.go:276] 0 containers: []
	W0416 17:31:35.253451   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:35.253458   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:35.253519   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:35.309954   52649 cri.go:89] found id: ""
	I0416 17:31:35.309982   52649 logs.go:276] 0 containers: []
	W0416 17:31:35.309994   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:35.310001   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:35.310060   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:35.353479   52649 cri.go:89] found id: ""
	I0416 17:31:35.353514   52649 logs.go:276] 0 containers: []
	W0416 17:31:35.353525   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:35.353532   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:35.353594   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:35.393253   52649 cri.go:89] found id: ""
	I0416 17:31:35.393280   52649 logs.go:276] 0 containers: []
	W0416 17:31:35.393290   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:35.393296   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:35.393356   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:35.432685   52649 cri.go:89] found id: ""
	I0416 17:31:35.432717   52649 logs.go:276] 0 containers: []
	W0416 17:31:35.432727   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:35.432733   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:35.432780   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:35.476297   52649 cri.go:89] found id: ""
	I0416 17:31:35.476330   52649 logs.go:276] 0 containers: []
	W0416 17:31:35.476341   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:35.476350   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:35.476411   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:35.525645   52649 cri.go:89] found id: ""
	I0416 17:31:35.525672   52649 logs.go:276] 0 containers: []
	W0416 17:31:35.525680   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:35.525687   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:35.525699   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:35.545056   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:35.545088   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:35.654152   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:35.654173   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:35.654183   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:35.759619   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:35.759659   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:35.805662   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:35.805695   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:38.369312   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:38.385084   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:38.385207   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:38.427411   52649 cri.go:89] found id: ""
	I0416 17:31:38.427434   52649 logs.go:276] 0 containers: []
	W0416 17:31:38.427441   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:38.427447   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:38.427489   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:38.466253   52649 cri.go:89] found id: ""
	I0416 17:31:38.466278   52649 logs.go:276] 0 containers: []
	W0416 17:31:38.466287   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:38.466293   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:38.466355   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:38.507282   52649 cri.go:89] found id: ""
	I0416 17:31:38.507303   52649 logs.go:276] 0 containers: []
	W0416 17:31:38.507313   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:38.507321   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:38.507374   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:38.548648   52649 cri.go:89] found id: ""
	I0416 17:31:38.548670   52649 logs.go:276] 0 containers: []
	W0416 17:31:38.548680   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:38.548686   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:38.548730   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:38.586103   52649 cri.go:89] found id: ""
	I0416 17:31:38.586126   52649 logs.go:276] 0 containers: []
	W0416 17:31:38.586133   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:38.586138   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:38.586189   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:38.631748   52649 cri.go:89] found id: ""
	I0416 17:31:38.631775   52649 logs.go:276] 0 containers: []
	W0416 17:31:38.631786   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:38.631793   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:38.631851   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:38.673937   52649 cri.go:89] found id: ""
	I0416 17:31:38.673962   52649 logs.go:276] 0 containers: []
	W0416 17:31:38.673973   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:38.673980   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:38.674041   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:38.718248   52649 cri.go:89] found id: ""
	I0416 17:31:38.718279   52649 logs.go:276] 0 containers: []
	W0416 17:31:38.718289   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:38.718299   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:38.718313   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:38.800045   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:38.800077   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:38.847617   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:38.847645   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:38.900993   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:38.901019   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:38.917397   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:38.917434   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:38.993641   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
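	(Between log-gathering passes the test re-checks for a kube-apiserver process with pgrep every few seconds, as the timestamps above show. Below is a rough sketch of such a wait loop, not the actual minikube implementation; it assumes pgrep is available on the node, and the 2-minute deadline is a hypothetical value, since the real retry and timeout settings are not visible in this excerpt.)

```go
// Sketch: poll for a kube-apiserver process the way the log does, until it
// appears or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same pattern the log runs over ssh: pgrep -xnf kube-apiserver.*minikube.*
		// pgrep exits non-zero when no process matches, which Run() reports as an error.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // apiserver process found
		}
		time.Sleep(3 * time.Second) // matches the roughly 3-second cadence in the timestamps above
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```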
	I0416 17:31:41.494234   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:41.519090   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:41.519164   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:41.606101   52649 cri.go:89] found id: ""
	I0416 17:31:41.606129   52649 logs.go:276] 0 containers: []
	W0416 17:31:41.606137   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:41.606143   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:41.606197   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:41.644206   52649 cri.go:89] found id: ""
	I0416 17:31:41.644234   52649 logs.go:276] 0 containers: []
	W0416 17:31:41.644245   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:41.644252   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:41.644309   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:41.684534   52649 cri.go:89] found id: ""
	I0416 17:31:41.684560   52649 logs.go:276] 0 containers: []
	W0416 17:31:41.684570   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:41.684580   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:41.684646   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:41.722606   52649 cri.go:89] found id: ""
	I0416 17:31:41.722637   52649 logs.go:276] 0 containers: []
	W0416 17:31:41.722648   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:41.722661   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:41.722781   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:41.767716   52649 cri.go:89] found id: ""
	I0416 17:31:41.767739   52649 logs.go:276] 0 containers: []
	W0416 17:31:41.767763   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:41.767770   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:41.767828   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:41.813599   52649 cri.go:89] found id: ""
	I0416 17:31:41.813617   52649 logs.go:276] 0 containers: []
	W0416 17:31:41.813624   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:41.813629   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:41.813714   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:41.858767   52649 cri.go:89] found id: ""
	I0416 17:31:41.858792   52649 logs.go:276] 0 containers: []
	W0416 17:31:41.858804   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:41.858811   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:41.858871   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:41.904027   52649 cri.go:89] found id: ""
	I0416 17:31:41.904050   52649 logs.go:276] 0 containers: []
	W0416 17:31:41.904060   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:41.904071   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:41.904085   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:41.918636   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:41.918665   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:41.998518   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:41.998542   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:41.998553   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:42.093121   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:42.093156   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:42.134090   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:42.134121   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:44.687535   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:44.702321   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:44.702385   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:44.743679   52649 cri.go:89] found id: ""
	I0416 17:31:44.743710   52649 logs.go:276] 0 containers: []
	W0416 17:31:44.743719   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:44.743725   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:44.743771   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:44.778763   52649 cri.go:89] found id: ""
	I0416 17:31:44.778790   52649 logs.go:276] 0 containers: []
	W0416 17:31:44.778799   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:44.778805   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:44.778858   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:44.814975   52649 cri.go:89] found id: ""
	I0416 17:31:44.815003   52649 logs.go:276] 0 containers: []
	W0416 17:31:44.815016   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:44.815023   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:44.815087   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:44.851641   52649 cri.go:89] found id: ""
	I0416 17:31:44.851669   52649 logs.go:276] 0 containers: []
	W0416 17:31:44.851679   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:44.851687   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:44.851739   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:44.887783   52649 cri.go:89] found id: ""
	I0416 17:31:44.887807   52649 logs.go:276] 0 containers: []
	W0416 17:31:44.887820   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:44.887833   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:44.887878   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:44.923885   52649 cri.go:89] found id: ""
	I0416 17:31:44.923912   52649 logs.go:276] 0 containers: []
	W0416 17:31:44.923921   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:44.923928   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:44.923986   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:44.959072   52649 cri.go:89] found id: ""
	I0416 17:31:44.959094   52649 logs.go:276] 0 containers: []
	W0416 17:31:44.959100   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:44.959107   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:44.959156   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:44.994303   52649 cri.go:89] found id: ""
	I0416 17:31:44.994326   52649 logs.go:276] 0 containers: []
	W0416 17:31:44.994333   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:44.994340   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:44.994354   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:45.068978   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:45.069008   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:45.114729   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:45.114765   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:45.167292   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:45.167316   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:45.181314   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:45.181339   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:45.253873   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
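	(Every "describe nodes" attempt fails the same way because nothing is listening on the apiserver port yet. The probe below is a minimal sketch for checking that condition directly, assuming the same localhost:8443 endpoint the kubeconfig points at; a "connection refused" from it is consistent with the kubectl errors above, since the control-plane containers never started and the port stays closed.)

```go
// Sketch: dial the apiserver port directly instead of going through kubectl.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Expected while the control plane is down: same root cause as the
		// "connection to the server localhost:8443 was refused" lines above.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```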
	I0416 17:31:47.754607   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:47.768898   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:47.768974   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:47.806954   52649 cri.go:89] found id: ""
	I0416 17:31:47.806984   52649 logs.go:276] 0 containers: []
	W0416 17:31:47.806993   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:47.807001   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:47.807064   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:47.845448   52649 cri.go:89] found id: ""
	I0416 17:31:47.845473   52649 logs.go:276] 0 containers: []
	W0416 17:31:47.845483   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:47.845490   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:47.845535   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:47.881217   52649 cri.go:89] found id: ""
	I0416 17:31:47.881242   52649 logs.go:276] 0 containers: []
	W0416 17:31:47.881249   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:47.881259   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:47.881312   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:47.918270   52649 cri.go:89] found id: ""
	I0416 17:31:47.918301   52649 logs.go:276] 0 containers: []
	W0416 17:31:47.918311   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:47.918319   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:47.918377   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:47.953189   52649 cri.go:89] found id: ""
	I0416 17:31:47.953222   52649 logs.go:276] 0 containers: []
	W0416 17:31:47.953233   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:47.953242   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:47.953305   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:47.990524   52649 cri.go:89] found id: ""
	I0416 17:31:47.990553   52649 logs.go:276] 0 containers: []
	W0416 17:31:47.990563   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:47.990570   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:47.990626   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:48.029887   52649 cri.go:89] found id: ""
	I0416 17:31:48.029923   52649 logs.go:276] 0 containers: []
	W0416 17:31:48.029934   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:48.029941   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:48.030054   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:48.068252   52649 cri.go:89] found id: ""
	I0416 17:31:48.068282   52649 logs.go:276] 0 containers: []
	W0416 17:31:48.068289   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:48.068297   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:48.068308   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:48.143019   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:48.143034   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:48.143046   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:48.222161   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:48.222205   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:48.266176   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:48.266207   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:48.320881   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:48.320908   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:50.834998   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:50.849482   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:50.849544   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:50.887632   52649 cri.go:89] found id: ""
	I0416 17:31:50.887658   52649 logs.go:276] 0 containers: []
	W0416 17:31:50.887669   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:50.887677   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:50.887732   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:50.927637   52649 cri.go:89] found id: ""
	I0416 17:31:50.927662   52649 logs.go:276] 0 containers: []
	W0416 17:31:50.927669   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:50.927674   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:50.927719   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:50.971732   52649 cri.go:89] found id: ""
	I0416 17:31:50.971754   52649 logs.go:276] 0 containers: []
	W0416 17:31:50.971761   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:50.971766   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:50.971811   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:51.010178   52649 cri.go:89] found id: ""
	I0416 17:31:51.010195   52649 logs.go:276] 0 containers: []
	W0416 17:31:51.010203   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:51.010213   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:51.010255   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:51.046471   52649 cri.go:89] found id: ""
	I0416 17:31:51.046495   52649 logs.go:276] 0 containers: []
	W0416 17:31:51.046502   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:51.046508   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:51.046552   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:51.084145   52649 cri.go:89] found id: ""
	I0416 17:31:51.084168   52649 logs.go:276] 0 containers: []
	W0416 17:31:51.084175   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:51.084181   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:51.084241   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:51.123201   52649 cri.go:89] found id: ""
	I0416 17:31:51.123223   52649 logs.go:276] 0 containers: []
	W0416 17:31:51.123230   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:51.123235   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:51.123276   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:51.162600   52649 cri.go:89] found id: ""
	I0416 17:31:51.162623   52649 logs.go:276] 0 containers: []
	W0416 17:31:51.162632   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:51.162644   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:51.162660   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:51.176534   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:51.176563   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:51.251822   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:51.251839   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:51.251850   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:51.330269   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:51.330304   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:51.372441   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:51.372467   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:53.926265   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:53.942147   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:53.942217   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:53.980858   52649 cri.go:89] found id: ""
	I0416 17:31:53.980886   52649 logs.go:276] 0 containers: []
	W0416 17:31:53.980899   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:53.980907   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:53.980969   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:54.018563   52649 cri.go:89] found id: ""
	I0416 17:31:54.018587   52649 logs.go:276] 0 containers: []
	W0416 17:31:54.018594   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:54.018599   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:54.018649   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:54.057425   52649 cri.go:89] found id: ""
	I0416 17:31:54.057448   52649 logs.go:276] 0 containers: []
	W0416 17:31:54.057455   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:54.057460   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:54.057503   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:54.094530   52649 cri.go:89] found id: ""
	I0416 17:31:54.094550   52649 logs.go:276] 0 containers: []
	W0416 17:31:54.094557   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:54.094562   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:54.094605   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:54.132202   52649 cri.go:89] found id: ""
	I0416 17:31:54.132224   52649 logs.go:276] 0 containers: []
	W0416 17:31:54.132231   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:54.132239   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:54.132289   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:54.181759   52649 cri.go:89] found id: ""
	I0416 17:31:54.181788   52649 logs.go:276] 0 containers: []
	W0416 17:31:54.181795   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:54.181800   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:54.181843   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:54.219566   52649 cri.go:89] found id: ""
	I0416 17:31:54.219594   52649 logs.go:276] 0 containers: []
	W0416 17:31:54.219604   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:54.219611   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:54.219658   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:54.259617   52649 cri.go:89] found id: ""
	I0416 17:31:54.259640   52649 logs.go:276] 0 containers: []
	W0416 17:31:54.259647   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:54.259654   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:54.259664   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:54.312889   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:54.312921   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:54.327211   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:54.327238   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:54.403766   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:54.403786   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:54.403799   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:54.485270   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:54.485300   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:31:57.024628   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:31:57.041124   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:31:57.041188   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:31:57.080462   52649 cri.go:89] found id: ""
	I0416 17:31:57.080484   52649 logs.go:276] 0 containers: []
	W0416 17:31:57.080491   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:31:57.080497   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:31:57.080539   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:31:57.116165   52649 cri.go:89] found id: ""
	I0416 17:31:57.116191   52649 logs.go:276] 0 containers: []
	W0416 17:31:57.116198   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:31:57.116203   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:31:57.116252   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:31:57.151896   52649 cri.go:89] found id: ""
	I0416 17:31:57.151921   52649 logs.go:276] 0 containers: []
	W0416 17:31:57.151932   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:31:57.151938   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:31:57.151996   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:31:57.190875   52649 cri.go:89] found id: ""
	I0416 17:31:57.190898   52649 logs.go:276] 0 containers: []
	W0416 17:31:57.190905   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:31:57.190911   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:31:57.190965   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:31:57.226602   52649 cri.go:89] found id: ""
	I0416 17:31:57.226633   52649 logs.go:276] 0 containers: []
	W0416 17:31:57.226644   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:31:57.226651   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:31:57.226703   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:31:57.264700   52649 cri.go:89] found id: ""
	I0416 17:31:57.264725   52649 logs.go:276] 0 containers: []
	W0416 17:31:57.264733   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:31:57.264738   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:31:57.264799   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:31:57.300118   52649 cri.go:89] found id: ""
	I0416 17:31:57.300142   52649 logs.go:276] 0 containers: []
	W0416 17:31:57.300155   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:31:57.300160   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:31:57.300200   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:31:57.335052   52649 cri.go:89] found id: ""
	I0416 17:31:57.335083   52649 logs.go:276] 0 containers: []
	W0416 17:31:57.335094   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:31:57.335105   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:31:57.335118   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:31:57.388490   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:31:57.388523   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:31:57.402094   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:31:57.402117   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:31:57.473393   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:31:57.473412   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:31:57.473424   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:31:57.548130   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:31:57.548160   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:00.091206   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:00.107731   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:00.107790   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:00.146090   52649 cri.go:89] found id: ""
	I0416 17:32:00.146113   52649 logs.go:276] 0 containers: []
	W0416 17:32:00.146120   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:00.146125   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:00.146168   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:00.184601   52649 cri.go:89] found id: ""
	I0416 17:32:00.184629   52649 logs.go:276] 0 containers: []
	W0416 17:32:00.184640   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:00.184648   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:00.184723   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:00.222611   52649 cri.go:89] found id: ""
	I0416 17:32:00.222629   52649 logs.go:276] 0 containers: []
	W0416 17:32:00.222636   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:00.222640   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:00.222691   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:00.259155   52649 cri.go:89] found id: ""
	I0416 17:32:00.259181   52649 logs.go:276] 0 containers: []
	W0416 17:32:00.259189   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:00.259194   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:00.259239   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:00.295725   52649 cri.go:89] found id: ""
	I0416 17:32:00.295746   52649 logs.go:276] 0 containers: []
	W0416 17:32:00.295753   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:00.295757   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:00.295806   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:00.330652   52649 cri.go:89] found id: ""
	I0416 17:32:00.330674   52649 logs.go:276] 0 containers: []
	W0416 17:32:00.330681   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:00.330687   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:00.330739   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:00.365739   52649 cri.go:89] found id: ""
	I0416 17:32:00.365801   52649 logs.go:276] 0 containers: []
	W0416 17:32:00.365810   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:00.365816   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:00.365862   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:00.404310   52649 cri.go:89] found id: ""
	I0416 17:32:00.404334   52649 logs.go:276] 0 containers: []
	W0416 17:32:00.404342   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:00.404363   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:00.404378   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:00.457994   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:00.458029   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:00.473548   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:00.473576   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:00.547378   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:00.547407   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:00.547424   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:00.624266   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:00.624302   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:03.176212   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:03.189932   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:03.189984   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:03.226812   52649 cri.go:89] found id: ""
	I0416 17:32:03.226836   52649 logs.go:276] 0 containers: []
	W0416 17:32:03.226843   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:03.226848   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:03.226889   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:03.265254   52649 cri.go:89] found id: ""
	I0416 17:32:03.265278   52649 logs.go:276] 0 containers: []
	W0416 17:32:03.265285   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:03.265291   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:03.265345   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:03.302605   52649 cri.go:89] found id: ""
	I0416 17:32:03.302635   52649 logs.go:276] 0 containers: []
	W0416 17:32:03.302645   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:03.302652   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:03.302702   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:03.349948   52649 cri.go:89] found id: ""
	I0416 17:32:03.349987   52649 logs.go:276] 0 containers: []
	W0416 17:32:03.349999   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:03.350009   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:03.350074   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:03.386863   52649 cri.go:89] found id: ""
	I0416 17:32:03.386895   52649 logs.go:276] 0 containers: []
	W0416 17:32:03.386906   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:03.386913   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:03.386966   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:03.424389   52649 cri.go:89] found id: ""
	I0416 17:32:03.424420   52649 logs.go:276] 0 containers: []
	W0416 17:32:03.424429   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:03.424436   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:03.424498   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:03.462795   52649 cri.go:89] found id: ""
	I0416 17:32:03.462832   52649 logs.go:276] 0 containers: []
	W0416 17:32:03.462844   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:03.462853   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:03.462916   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:03.501123   52649 cri.go:89] found id: ""
	I0416 17:32:03.501155   52649 logs.go:276] 0 containers: []
	W0416 17:32:03.501165   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:03.501176   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:03.501191   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:03.553969   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:03.554003   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:03.569417   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:03.569446   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:03.642893   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:03.642922   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:03.642934   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:03.715678   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:03.715710   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:06.260562   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:06.274337   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:06.274388   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:06.309625   52649 cri.go:89] found id: ""
	I0416 17:32:06.309653   52649 logs.go:276] 0 containers: []
	W0416 17:32:06.309662   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:06.309670   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:06.309714   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:06.344702   52649 cri.go:89] found id: ""
	I0416 17:32:06.344725   52649 logs.go:276] 0 containers: []
	W0416 17:32:06.344733   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:06.344739   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:06.344782   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:06.378757   52649 cri.go:89] found id: ""
	I0416 17:32:06.378784   52649 logs.go:276] 0 containers: []
	W0416 17:32:06.378793   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:06.378798   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:06.378847   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:06.415513   52649 cri.go:89] found id: ""
	I0416 17:32:06.415535   52649 logs.go:276] 0 containers: []
	W0416 17:32:06.415543   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:06.415548   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:06.415593   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:06.450017   52649 cri.go:89] found id: ""
	I0416 17:32:06.450041   52649 logs.go:276] 0 containers: []
	W0416 17:32:06.450048   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:06.450054   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:06.450111   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:06.487689   52649 cri.go:89] found id: ""
	I0416 17:32:06.487724   52649 logs.go:276] 0 containers: []
	W0416 17:32:06.487735   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:06.487742   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:06.487815   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:06.522655   52649 cri.go:89] found id: ""
	I0416 17:32:06.522677   52649 logs.go:276] 0 containers: []
	W0416 17:32:06.522683   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:06.522688   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:06.522732   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:06.559362   52649 cri.go:89] found id: ""
	I0416 17:32:06.559388   52649 logs.go:276] 0 containers: []
	W0416 17:32:06.559398   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:06.559409   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:06.559428   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:06.633069   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:06.633097   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:06.676508   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:06.676533   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:06.723667   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:06.723693   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:06.738123   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:06.738144   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:06.814989   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:09.315281   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:09.329828   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:09.329897   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:09.367994   52649 cri.go:89] found id: ""
	I0416 17:32:09.368024   52649 logs.go:276] 0 containers: []
	W0416 17:32:09.368031   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:09.368039   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:09.368095   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:09.404556   52649 cri.go:89] found id: ""
	I0416 17:32:09.404582   52649 logs.go:276] 0 containers: []
	W0416 17:32:09.404592   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:09.404600   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:09.404656   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:09.437841   52649 cri.go:89] found id: ""
	I0416 17:32:09.437869   52649 logs.go:276] 0 containers: []
	W0416 17:32:09.437880   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:09.437887   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:09.437961   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:09.474162   52649 cri.go:89] found id: ""
	I0416 17:32:09.474185   52649 logs.go:276] 0 containers: []
	W0416 17:32:09.474192   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:09.474198   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:09.474261   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:09.508670   52649 cri.go:89] found id: ""
	I0416 17:32:09.508695   52649 logs.go:276] 0 containers: []
	W0416 17:32:09.508704   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:09.508709   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:09.508771   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:09.544878   52649 cri.go:89] found id: ""
	I0416 17:32:09.544897   52649 logs.go:276] 0 containers: []
	W0416 17:32:09.544904   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:09.544910   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:09.544964   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:09.581782   52649 cri.go:89] found id: ""
	I0416 17:32:09.581811   52649 logs.go:276] 0 containers: []
	W0416 17:32:09.581821   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:09.581828   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:09.581875   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:09.616703   52649 cri.go:89] found id: ""
	I0416 17:32:09.616730   52649 logs.go:276] 0 containers: []
	W0416 17:32:09.616740   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:09.616751   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:09.616765   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:09.631495   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:09.631519   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:09.699930   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:09.699957   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:09.699972   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:09.779943   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:09.779973   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:09.820510   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:09.820528   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:12.395511   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:12.410619   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:12.410694   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:12.447199   52649 cri.go:89] found id: ""
	I0416 17:32:12.447221   52649 logs.go:276] 0 containers: []
	W0416 17:32:12.447229   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:12.447234   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:12.447294   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:12.489760   52649 cri.go:89] found id: ""
	I0416 17:32:12.489794   52649 logs.go:276] 0 containers: []
	W0416 17:32:12.489807   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:12.489816   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:12.489879   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:12.527754   52649 cri.go:89] found id: ""
	I0416 17:32:12.527779   52649 logs.go:276] 0 containers: []
	W0416 17:32:12.527787   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:12.527794   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:12.527842   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:12.562138   52649 cri.go:89] found id: ""
	I0416 17:32:12.562167   52649 logs.go:276] 0 containers: []
	W0416 17:32:12.562177   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:12.562184   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:12.562234   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:12.600581   52649 cri.go:89] found id: ""
	I0416 17:32:12.600604   52649 logs.go:276] 0 containers: []
	W0416 17:32:12.600611   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:12.600618   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:12.600704   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:12.646882   52649 cri.go:89] found id: ""
	I0416 17:32:12.646903   52649 logs.go:276] 0 containers: []
	W0416 17:32:12.646911   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:12.646916   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:12.646958   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:12.681555   52649 cri.go:89] found id: ""
	I0416 17:32:12.681585   52649 logs.go:276] 0 containers: []
	W0416 17:32:12.681595   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:12.681601   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:12.681649   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:12.722176   52649 cri.go:89] found id: ""
	I0416 17:32:12.722197   52649 logs.go:276] 0 containers: []
	W0416 17:32:12.722203   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:12.722211   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:12.722221   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:12.736177   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:12.736206   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:12.817317   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:12.817342   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:12.817357   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:12.891806   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:12.891836   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:12.933138   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:12.933167   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:15.485179   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:15.499894   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:15.499949   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:15.539194   52649 cri.go:89] found id: ""
	I0416 17:32:15.539216   52649 logs.go:276] 0 containers: []
	W0416 17:32:15.539224   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:15.539229   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:15.539275   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:15.592816   52649 cri.go:89] found id: ""
	I0416 17:32:15.592887   52649 logs.go:276] 0 containers: []
	W0416 17:32:15.592899   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:15.592906   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:15.592966   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:15.633984   52649 cri.go:89] found id: ""
	I0416 17:32:15.634015   52649 logs.go:276] 0 containers: []
	W0416 17:32:15.634024   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:15.634030   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:15.634078   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:15.673768   52649 cri.go:89] found id: ""
	I0416 17:32:15.673794   52649 logs.go:276] 0 containers: []
	W0416 17:32:15.673802   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:15.673808   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:15.673864   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:15.711911   52649 cri.go:89] found id: ""
	I0416 17:32:15.711939   52649 logs.go:276] 0 containers: []
	W0416 17:32:15.711946   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:15.711951   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:15.711998   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:15.750242   52649 cri.go:89] found id: ""
	I0416 17:32:15.750270   52649 logs.go:276] 0 containers: []
	W0416 17:32:15.750279   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:15.750285   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:15.750334   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:15.786804   52649 cri.go:89] found id: ""
	I0416 17:32:15.786827   52649 logs.go:276] 0 containers: []
	W0416 17:32:15.786835   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:15.786840   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:15.786884   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:15.822926   52649 cri.go:89] found id: ""
	I0416 17:32:15.822957   52649 logs.go:276] 0 containers: []
	W0416 17:32:15.822967   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:15.822979   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:15.823001   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:15.877169   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:15.877198   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:15.891260   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:15.891284   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:15.967523   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:15.967545   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:15.967558   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:16.042533   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:16.042565   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:18.582495   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:18.597166   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:18.597222   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:18.635304   52649 cri.go:89] found id: ""
	I0416 17:32:18.635330   52649 logs.go:276] 0 containers: []
	W0416 17:32:18.635340   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:18.635347   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:18.635398   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:18.677854   52649 cri.go:89] found id: ""
	I0416 17:32:18.677881   52649 logs.go:276] 0 containers: []
	W0416 17:32:18.677889   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:18.677895   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:18.677949   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:18.715617   52649 cri.go:89] found id: ""
	I0416 17:32:18.715643   52649 logs.go:276] 0 containers: []
	W0416 17:32:18.715650   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:18.715656   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:18.715706   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:18.753941   52649 cri.go:89] found id: ""
	I0416 17:32:18.753964   52649 logs.go:276] 0 containers: []
	W0416 17:32:18.753972   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:18.753980   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:18.754035   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:18.788667   52649 cri.go:89] found id: ""
	I0416 17:32:18.788693   52649 logs.go:276] 0 containers: []
	W0416 17:32:18.788700   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:18.788705   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:18.788757   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:18.827062   52649 cri.go:89] found id: ""
	I0416 17:32:18.827093   52649 logs.go:276] 0 containers: []
	W0416 17:32:18.827103   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:18.827111   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:18.827168   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:18.862251   52649 cri.go:89] found id: ""
	I0416 17:32:18.862271   52649 logs.go:276] 0 containers: []
	W0416 17:32:18.862278   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:18.862282   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:18.862327   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:18.897617   52649 cri.go:89] found id: ""
	I0416 17:32:18.897639   52649 logs.go:276] 0 containers: []
	W0416 17:32:18.897646   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:18.897654   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:18.897664   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:18.911123   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:18.911144   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:18.987020   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:18.987046   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:18.987067   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:19.069924   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:19.069962   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:19.112214   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:19.112247   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:21.662523   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:21.677397   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:21.677447   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:21.714838   52649 cri.go:89] found id: ""
	I0416 17:32:21.714861   52649 logs.go:276] 0 containers: []
	W0416 17:32:21.714868   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:21.714873   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:21.714916   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:21.751886   52649 cri.go:89] found id: ""
	I0416 17:32:21.751912   52649 logs.go:276] 0 containers: []
	W0416 17:32:21.751920   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:21.751925   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:21.751969   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:21.786973   52649 cri.go:89] found id: ""
	I0416 17:32:21.787009   52649 logs.go:276] 0 containers: []
	W0416 17:32:21.787020   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:21.787027   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:21.787086   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:21.822352   52649 cri.go:89] found id: ""
	I0416 17:32:21.822378   52649 logs.go:276] 0 containers: []
	W0416 17:32:21.822388   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:21.822395   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:21.822453   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:21.861374   52649 cri.go:89] found id: ""
	I0416 17:32:21.861417   52649 logs.go:276] 0 containers: []
	W0416 17:32:21.861436   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:21.861444   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:21.861511   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:21.896443   52649 cri.go:89] found id: ""
	I0416 17:32:21.896470   52649 logs.go:276] 0 containers: []
	W0416 17:32:21.896479   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:21.896485   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:21.896527   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:21.939309   52649 cri.go:89] found id: ""
	I0416 17:32:21.939333   52649 logs.go:276] 0 containers: []
	W0416 17:32:21.939340   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:21.939345   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:21.939389   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:21.989234   52649 cri.go:89] found id: ""
	I0416 17:32:21.989258   52649 logs.go:276] 0 containers: []
	W0416 17:32:21.989265   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:21.989274   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:21.989288   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:22.077020   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:22.077058   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:22.121349   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:22.121379   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:22.171426   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:22.171454   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:22.185127   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:22.185150   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:22.258614   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:24.759395   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:24.778940   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:24.779012   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:24.822802   52649 cri.go:89] found id: ""
	I0416 17:32:24.822829   52649 logs.go:276] 0 containers: []
	W0416 17:32:24.822839   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:24.822847   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:24.822907   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:24.867465   52649 cri.go:89] found id: ""
	I0416 17:32:24.867492   52649 logs.go:276] 0 containers: []
	W0416 17:32:24.867502   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:24.867509   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:24.867570   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:24.916533   52649 cri.go:89] found id: ""
	I0416 17:32:24.916565   52649 logs.go:276] 0 containers: []
	W0416 17:32:24.916577   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:24.916584   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:24.916659   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:24.954499   52649 cri.go:89] found id: ""
	I0416 17:32:24.954525   52649 logs.go:276] 0 containers: []
	W0416 17:32:24.954539   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:24.954548   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:24.954602   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:25.003843   52649 cri.go:89] found id: ""
	I0416 17:32:25.003868   52649 logs.go:276] 0 containers: []
	W0416 17:32:25.003876   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:25.003881   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:25.003938   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:25.045085   52649 cri.go:89] found id: ""
	I0416 17:32:25.045114   52649 logs.go:276] 0 containers: []
	W0416 17:32:25.045123   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:25.045129   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:25.045187   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:25.081858   52649 cri.go:89] found id: ""
	I0416 17:32:25.081891   52649 logs.go:276] 0 containers: []
	W0416 17:32:25.081902   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:25.081910   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:25.081967   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:25.120288   52649 cri.go:89] found id: ""
	I0416 17:32:25.120315   52649 logs.go:276] 0 containers: []
	W0416 17:32:25.120331   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:25.120342   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:25.120354   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:25.174268   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:25.174305   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:25.188532   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:25.188554   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:25.261718   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:25.261739   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:25.261754   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:25.340712   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:25.340745   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:27.891054   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:27.906452   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:27.906500   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:27.944441   52649 cri.go:89] found id: ""
	I0416 17:32:27.944460   52649 logs.go:276] 0 containers: []
	W0416 17:32:27.944468   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:27.944473   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:27.944526   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:27.982504   52649 cri.go:89] found id: ""
	I0416 17:32:27.982527   52649 logs.go:276] 0 containers: []
	W0416 17:32:27.982535   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:27.982542   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:27.982586   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:28.023258   52649 cri.go:89] found id: ""
	I0416 17:32:28.023279   52649 logs.go:276] 0 containers: []
	W0416 17:32:28.023287   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:28.023294   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:28.023350   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:28.062809   52649 cri.go:89] found id: ""
	I0416 17:32:28.062831   52649 logs.go:276] 0 containers: []
	W0416 17:32:28.062838   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:28.062843   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:28.062895   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:28.099011   52649 cri.go:89] found id: ""
	I0416 17:32:28.099036   52649 logs.go:276] 0 containers: []
	W0416 17:32:28.099043   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:28.099048   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:28.099090   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:28.134175   52649 cri.go:89] found id: ""
	I0416 17:32:28.134203   52649 logs.go:276] 0 containers: []
	W0416 17:32:28.134212   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:28.134217   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:28.134259   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:28.175067   52649 cri.go:89] found id: ""
	I0416 17:32:28.175090   52649 logs.go:276] 0 containers: []
	W0416 17:32:28.175096   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:28.175104   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:28.175151   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:28.215747   52649 cri.go:89] found id: ""
	I0416 17:32:28.215769   52649 logs.go:276] 0 containers: []
	W0416 17:32:28.215776   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:28.215783   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:28.215794   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:28.288467   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:28.288489   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:28.288502   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:28.364812   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:28.364850   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:28.407595   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:28.407621   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:28.460415   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:28.460448   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:30.975474   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:30.990446   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:30.990511   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:31.030564   52649 cri.go:89] found id: ""
	I0416 17:32:31.030592   52649 logs.go:276] 0 containers: []
	W0416 17:32:31.030608   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:31.030616   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:31.030677   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:31.068285   52649 cri.go:89] found id: ""
	I0416 17:32:31.068315   52649 logs.go:276] 0 containers: []
	W0416 17:32:31.068335   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:31.068342   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:31.068398   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:31.103525   52649 cri.go:89] found id: ""
	I0416 17:32:31.103553   52649 logs.go:276] 0 containers: []
	W0416 17:32:31.103563   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:31.103570   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:31.103635   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:31.138350   52649 cri.go:89] found id: ""
	I0416 17:32:31.138378   52649 logs.go:276] 0 containers: []
	W0416 17:32:31.138387   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:31.138393   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:31.138447   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:31.174891   52649 cri.go:89] found id: ""
	I0416 17:32:31.174915   52649 logs.go:276] 0 containers: []
	W0416 17:32:31.174923   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:31.174928   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:31.174982   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:31.211969   52649 cri.go:89] found id: ""
	I0416 17:32:31.211997   52649 logs.go:276] 0 containers: []
	W0416 17:32:31.212008   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:31.212015   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:31.212068   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:31.249278   52649 cri.go:89] found id: ""
	I0416 17:32:31.249304   52649 logs.go:276] 0 containers: []
	W0416 17:32:31.249315   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:31.249322   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:31.249379   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:31.285486   52649 cri.go:89] found id: ""
	I0416 17:32:31.285512   52649 logs.go:276] 0 containers: []
	W0416 17:32:31.285522   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:31.285533   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:31.285549   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:31.341213   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:31.341240   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:31.355061   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:31.355084   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:31.439218   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:31.439239   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:31.439254   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:31.520205   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:31.520240   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:34.070488   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:34.084911   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:34.084967   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:34.123040   52649 cri.go:89] found id: ""
	I0416 17:32:34.123068   52649 logs.go:276] 0 containers: []
	W0416 17:32:34.123078   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:34.123085   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:34.123149   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:34.159758   52649 cri.go:89] found id: ""
	I0416 17:32:34.159789   52649 logs.go:276] 0 containers: []
	W0416 17:32:34.159797   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:34.159803   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:34.159852   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:34.195994   52649 cri.go:89] found id: ""
	I0416 17:32:34.196015   52649 logs.go:276] 0 containers: []
	W0416 17:32:34.196022   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:34.196027   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:34.196072   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:34.234661   52649 cri.go:89] found id: ""
	I0416 17:32:34.234691   52649 logs.go:276] 0 containers: []
	W0416 17:32:34.234699   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:34.234706   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:34.234767   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:34.270969   52649 cri.go:89] found id: ""
	I0416 17:32:34.270999   52649 logs.go:276] 0 containers: []
	W0416 17:32:34.271010   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:34.271017   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:34.271079   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:34.306440   52649 cri.go:89] found id: ""
	I0416 17:32:34.306470   52649 logs.go:276] 0 containers: []
	W0416 17:32:34.306481   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:34.306488   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:34.306554   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:34.348523   52649 cri.go:89] found id: ""
	I0416 17:32:34.348551   52649 logs.go:276] 0 containers: []
	W0416 17:32:34.348563   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:34.348570   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:34.348629   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:34.389360   52649 cri.go:89] found id: ""
	I0416 17:32:34.389389   52649 logs.go:276] 0 containers: []
	W0416 17:32:34.389400   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:34.389411   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:34.389424   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:34.429769   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:34.429793   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:34.482814   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:34.482846   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:34.497008   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:34.497033   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:34.570258   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:34.570282   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:34.570293   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:37.153083   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:37.167966   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:37.168023   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:37.204122   52649 cri.go:89] found id: ""
	I0416 17:32:37.204159   52649 logs.go:276] 0 containers: []
	W0416 17:32:37.204168   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:37.204173   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:37.204220   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:37.241604   52649 cri.go:89] found id: ""
	I0416 17:32:37.241629   52649 logs.go:276] 0 containers: []
	W0416 17:32:37.241636   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:37.241642   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:37.241683   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:37.277379   52649 cri.go:89] found id: ""
	I0416 17:32:37.277400   52649 logs.go:276] 0 containers: []
	W0416 17:32:37.277408   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:37.277414   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:37.277461   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:37.313038   52649 cri.go:89] found id: ""
	I0416 17:32:37.313062   52649 logs.go:276] 0 containers: []
	W0416 17:32:37.313069   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:37.313074   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:37.313117   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:37.348956   52649 cri.go:89] found id: ""
	I0416 17:32:37.348983   52649 logs.go:276] 0 containers: []
	W0416 17:32:37.348991   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:37.348996   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:37.349046   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:37.386753   52649 cri.go:89] found id: ""
	I0416 17:32:37.386781   52649 logs.go:276] 0 containers: []
	W0416 17:32:37.386791   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:37.386799   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:37.386857   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:37.426207   52649 cri.go:89] found id: ""
	I0416 17:32:37.426232   52649 logs.go:276] 0 containers: []
	W0416 17:32:37.426239   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:37.426244   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:37.426286   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:37.462143   52649 cri.go:89] found id: ""
	I0416 17:32:37.462168   52649 logs.go:276] 0 containers: []
	W0416 17:32:37.462175   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:37.462183   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:37.462193   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:37.514281   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:37.514305   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:37.528494   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:37.528518   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:37.597790   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:37.597810   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:37.597822   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:37.679197   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:37.679225   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:40.220954   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:40.237379   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:40.237446   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:40.276602   52649 cri.go:89] found id: ""
	I0416 17:32:40.276630   52649 logs.go:276] 0 containers: []
	W0416 17:32:40.276640   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:40.276647   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:40.276711   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:40.314378   52649 cri.go:89] found id: ""
	I0416 17:32:40.314404   52649 logs.go:276] 0 containers: []
	W0416 17:32:40.314415   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:40.314422   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:40.314479   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:40.356456   52649 cri.go:89] found id: ""
	I0416 17:32:40.356483   52649 logs.go:276] 0 containers: []
	W0416 17:32:40.356492   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:40.356499   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:40.356554   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:40.394163   52649 cri.go:89] found id: ""
	I0416 17:32:40.394190   52649 logs.go:276] 0 containers: []
	W0416 17:32:40.394200   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:40.394207   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:40.394267   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:40.431136   52649 cri.go:89] found id: ""
	I0416 17:32:40.431166   52649 logs.go:276] 0 containers: []
	W0416 17:32:40.431175   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:40.431183   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:40.431248   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:40.468852   52649 cri.go:89] found id: ""
	I0416 17:32:40.468874   52649 logs.go:276] 0 containers: []
	W0416 17:32:40.468884   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:40.468892   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:40.468947   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:40.506668   52649 cri.go:89] found id: ""
	I0416 17:32:40.506693   52649 logs.go:276] 0 containers: []
	W0416 17:32:40.506701   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:40.506706   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:40.506750   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:40.550504   52649 cri.go:89] found id: ""
	I0416 17:32:40.550530   52649 logs.go:276] 0 containers: []
	W0416 17:32:40.550537   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:40.550547   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:40.550563   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:40.608131   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:40.608161   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:40.622434   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:40.622456   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:40.693095   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:40.693117   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:40.693128   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:40.774556   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:40.774586   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:43.320484   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:43.335152   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:43.335219   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:43.371662   52649 cri.go:89] found id: ""
	I0416 17:32:43.371685   52649 logs.go:276] 0 containers: []
	W0416 17:32:43.371693   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:43.371698   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:43.371743   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:43.408105   52649 cri.go:89] found id: ""
	I0416 17:32:43.408124   52649 logs.go:276] 0 containers: []
	W0416 17:32:43.408130   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:43.408141   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:43.408186   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:43.445002   52649 cri.go:89] found id: ""
	I0416 17:32:43.445027   52649 logs.go:276] 0 containers: []
	W0416 17:32:43.445036   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:43.445042   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:43.445093   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:43.482337   52649 cri.go:89] found id: ""
	I0416 17:32:43.482366   52649 logs.go:276] 0 containers: []
	W0416 17:32:43.482379   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:43.482385   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:43.482433   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:43.517211   52649 cri.go:89] found id: ""
	I0416 17:32:43.517236   52649 logs.go:276] 0 containers: []
	W0416 17:32:43.517243   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:43.517248   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:43.517300   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:43.552645   52649 cri.go:89] found id: ""
	I0416 17:32:43.552676   52649 logs.go:276] 0 containers: []
	W0416 17:32:43.552686   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:43.552702   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:43.552764   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:43.592320   52649 cri.go:89] found id: ""
	I0416 17:32:43.592346   52649 logs.go:276] 0 containers: []
	W0416 17:32:43.592357   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:43.592365   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:43.592421   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:43.628435   52649 cri.go:89] found id: ""
	I0416 17:32:43.628459   52649 logs.go:276] 0 containers: []
	W0416 17:32:43.628468   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:43.628477   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:43.628492   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:43.678876   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:43.678904   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:43.693051   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:43.693076   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:43.765041   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:43.765063   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:43.765077   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:43.842526   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:43.842599   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:46.390825   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:46.406682   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:46.406741   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:46.446444   52649 cri.go:89] found id: ""
	I0416 17:32:46.446472   52649 logs.go:276] 0 containers: []
	W0416 17:32:46.446480   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:46.446486   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:46.446538   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:46.483128   52649 cri.go:89] found id: ""
	I0416 17:32:46.483152   52649 logs.go:276] 0 containers: []
	W0416 17:32:46.483163   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:46.483170   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:46.483228   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:46.520299   52649 cri.go:89] found id: ""
	I0416 17:32:46.520326   52649 logs.go:276] 0 containers: []
	W0416 17:32:46.520338   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:46.520345   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:46.520403   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:46.559498   52649 cri.go:89] found id: ""
	I0416 17:32:46.559528   52649 logs.go:276] 0 containers: []
	W0416 17:32:46.559539   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:46.559546   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:46.559604   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:46.599824   52649 cri.go:89] found id: ""
	I0416 17:32:46.599853   52649 logs.go:276] 0 containers: []
	W0416 17:32:46.599866   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:46.599871   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:46.599918   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:46.636739   52649 cri.go:89] found id: ""
	I0416 17:32:46.636774   52649 logs.go:276] 0 containers: []
	W0416 17:32:46.636782   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:46.636788   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:46.636847   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:46.673717   52649 cri.go:89] found id: ""
	I0416 17:32:46.673748   52649 logs.go:276] 0 containers: []
	W0416 17:32:46.673760   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:46.673769   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:46.673836   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:46.711035   52649 cri.go:89] found id: ""
	I0416 17:32:46.711064   52649 logs.go:276] 0 containers: []
	W0416 17:32:46.711075   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:46.711086   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:46.711102   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:46.764700   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:46.764727   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:46.778792   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:46.778815   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:46.849257   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:46.849275   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:46.849286   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:46.932405   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:46.932433   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:49.474546   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:49.491375   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:49.491448   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:49.531757   52649 cri.go:89] found id: ""
	I0416 17:32:49.531786   52649 logs.go:276] 0 containers: []
	W0416 17:32:49.531798   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:49.531806   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:49.531869   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:49.568431   52649 cri.go:89] found id: ""
	I0416 17:32:49.568452   52649 logs.go:276] 0 containers: []
	W0416 17:32:49.568460   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:49.568465   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:49.568527   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:49.606355   52649 cri.go:89] found id: ""
	I0416 17:32:49.606381   52649 logs.go:276] 0 containers: []
	W0416 17:32:49.606388   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:49.606393   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:49.606451   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:49.642708   52649 cri.go:89] found id: ""
	I0416 17:32:49.642733   52649 logs.go:276] 0 containers: []
	W0416 17:32:49.642742   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:49.642751   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:49.642825   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:49.676992   52649 cri.go:89] found id: ""
	I0416 17:32:49.677018   52649 logs.go:276] 0 containers: []
	W0416 17:32:49.677027   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:49.677034   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:49.677096   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:49.712320   52649 cri.go:89] found id: ""
	I0416 17:32:49.712344   52649 logs.go:276] 0 containers: []
	W0416 17:32:49.712351   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:49.712356   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:49.712399   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:49.748972   52649 cri.go:89] found id: ""
	I0416 17:32:49.748995   52649 logs.go:276] 0 containers: []
	W0416 17:32:49.749003   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:49.749008   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:49.749097   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:49.783650   52649 cri.go:89] found id: ""
	I0416 17:32:49.783673   52649 logs.go:276] 0 containers: []
	W0416 17:32:49.783680   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:49.783688   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:49.783698   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:49.797639   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:49.797661   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:49.872168   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:49.872191   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:49.872206   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:49.955037   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:49.955065   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:49.995673   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:49.995701   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:52.551360   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:52.566267   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:52.566332   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:52.603452   52649 cri.go:89] found id: ""
	I0416 17:32:52.603482   52649 logs.go:276] 0 containers: []
	W0416 17:32:52.603491   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:52.603496   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:52.603548   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:52.640275   52649 cri.go:89] found id: ""
	I0416 17:32:52.640304   52649 logs.go:276] 0 containers: []
	W0416 17:32:52.640317   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:52.640324   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:52.640387   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:52.677992   52649 cri.go:89] found id: ""
	I0416 17:32:52.678027   52649 logs.go:276] 0 containers: []
	W0416 17:32:52.678039   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:52.678051   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:52.678113   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:52.714411   52649 cri.go:89] found id: ""
	I0416 17:32:52.714440   52649 logs.go:276] 0 containers: []
	W0416 17:32:52.714451   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:52.714458   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:52.714521   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:52.751507   52649 cri.go:89] found id: ""
	I0416 17:32:52.751537   52649 logs.go:276] 0 containers: []
	W0416 17:32:52.751548   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:52.751556   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:52.751619   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:52.788924   52649 cri.go:89] found id: ""
	I0416 17:32:52.788951   52649 logs.go:276] 0 containers: []
	W0416 17:32:52.788962   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:52.788974   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:52.789035   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:52.824262   52649 cri.go:89] found id: ""
	I0416 17:32:52.824289   52649 logs.go:276] 0 containers: []
	W0416 17:32:52.824298   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:52.824305   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:52.824360   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:52.859585   52649 cri.go:89] found id: ""
	I0416 17:32:52.859613   52649 logs.go:276] 0 containers: []
	W0416 17:32:52.859623   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:52.859634   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:52.859650   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:52.946890   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:52.946920   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:52.986018   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:52.986047   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:53.038382   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:53.038413   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:53.052457   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:53.052482   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:53.123524   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:55.623738   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:55.638818   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:55.638877   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:55.675582   52649 cri.go:89] found id: ""
	I0416 17:32:55.675610   52649 logs.go:276] 0 containers: []
	W0416 17:32:55.675620   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:55.675628   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:55.675689   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:55.715683   52649 cri.go:89] found id: ""
	I0416 17:32:55.715711   52649 logs.go:276] 0 containers: []
	W0416 17:32:55.715720   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:55.715728   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:55.715784   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:55.761543   52649 cri.go:89] found id: ""
	I0416 17:32:55.761569   52649 logs.go:276] 0 containers: []
	W0416 17:32:55.761578   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:55.761586   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:55.761633   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:55.798064   52649 cri.go:89] found id: ""
	I0416 17:32:55.798092   52649 logs.go:276] 0 containers: []
	W0416 17:32:55.798102   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:55.798109   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:55.798165   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:55.833660   52649 cri.go:89] found id: ""
	I0416 17:32:55.833692   52649 logs.go:276] 0 containers: []
	W0416 17:32:55.833703   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:55.833709   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:55.833758   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:55.869966   52649 cri.go:89] found id: ""
	I0416 17:32:55.869993   52649 logs.go:276] 0 containers: []
	W0416 17:32:55.870003   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:55.870010   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:55.870080   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:55.910606   52649 cri.go:89] found id: ""
	I0416 17:32:55.910635   52649 logs.go:276] 0 containers: []
	W0416 17:32:55.910645   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:55.910653   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:55.910712   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:55.946314   52649 cri.go:89] found id: ""
	I0416 17:32:55.946340   52649 logs.go:276] 0 containers: []
	W0416 17:32:55.946350   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:55.946361   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:55.946376   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:55.987416   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:55.987444   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:56.039481   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:56.039512   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:56.053797   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:56.053824   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:56.138362   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:32:56.138383   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:56.138398   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:58.717503   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:32:58.732613   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:32:58.732689   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:32:58.769207   52649 cri.go:89] found id: ""
	I0416 17:32:58.769237   52649 logs.go:276] 0 containers: []
	W0416 17:32:58.769249   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:32:58.769257   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:32:58.769315   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:32:58.819880   52649 cri.go:89] found id: ""
	I0416 17:32:58.819899   52649 logs.go:276] 0 containers: []
	W0416 17:32:58.819906   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:32:58.819910   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:32:58.819964   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:32:58.858673   52649 cri.go:89] found id: ""
	I0416 17:32:58.858696   52649 logs.go:276] 0 containers: []
	W0416 17:32:58.858704   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:32:58.858709   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:32:58.858759   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:32:58.898666   52649 cri.go:89] found id: ""
	I0416 17:32:58.898687   52649 logs.go:276] 0 containers: []
	W0416 17:32:58.898694   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:32:58.898698   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:32:58.898756   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:32:58.938949   52649 cri.go:89] found id: ""
	I0416 17:32:58.938969   52649 logs.go:276] 0 containers: []
	W0416 17:32:58.938976   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:32:58.938981   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:32:58.939026   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:32:58.973898   52649 cri.go:89] found id: ""
	I0416 17:32:58.973914   52649 logs.go:276] 0 containers: []
	W0416 17:32:58.973920   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:32:58.973925   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:32:58.973965   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:32:59.009207   52649 cri.go:89] found id: ""
	I0416 17:32:59.009223   52649 logs.go:276] 0 containers: []
	W0416 17:32:59.009229   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:32:59.009234   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:32:59.009278   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:32:59.043090   52649 cri.go:89] found id: ""
	I0416 17:32:59.043110   52649 logs.go:276] 0 containers: []
	W0416 17:32:59.043118   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:32:59.043126   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:32:59.043142   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:32:59.126745   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:32:59.126780   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:32:59.173705   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:32:59.173730   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:32:59.224751   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:32:59.224776   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:32:59.239442   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:32:59.239466   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:32:59.317630   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:01.818086   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:01.840282   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:01.840344   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:01.883743   52649 cri.go:89] found id: ""
	I0416 17:33:01.883773   52649 logs.go:276] 0 containers: []
	W0416 17:33:01.883780   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:01.883787   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:01.883871   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:01.921331   52649 cri.go:89] found id: ""
	I0416 17:33:01.921354   52649 logs.go:276] 0 containers: []
	W0416 17:33:01.921368   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:01.921375   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:01.921435   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:01.958051   52649 cri.go:89] found id: ""
	I0416 17:33:01.958073   52649 logs.go:276] 0 containers: []
	W0416 17:33:01.958080   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:01.958085   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:01.958131   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:01.992931   52649 cri.go:89] found id: ""
	I0416 17:33:01.992957   52649 logs.go:276] 0 containers: []
	W0416 17:33:01.992964   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:01.992969   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:01.993013   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:02.027655   52649 cri.go:89] found id: ""
	I0416 17:33:02.027680   52649 logs.go:276] 0 containers: []
	W0416 17:33:02.027688   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:02.027693   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:02.027736   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:02.062727   52649 cri.go:89] found id: ""
	I0416 17:33:02.062748   52649 logs.go:276] 0 containers: []
	W0416 17:33:02.062755   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:02.062761   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:02.062813   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:02.102050   52649 cri.go:89] found id: ""
	I0416 17:33:02.102075   52649 logs.go:276] 0 containers: []
	W0416 17:33:02.102082   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:02.102087   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:02.102129   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:02.139055   52649 cri.go:89] found id: ""
	I0416 17:33:02.139085   52649 logs.go:276] 0 containers: []
	W0416 17:33:02.139094   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:02.139103   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:02.139116   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:02.211714   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:02.211741   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:02.211755   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:02.290386   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:02.290417   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:02.334258   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:02.334288   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:02.384153   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:02.384181   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:04.899324   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:04.913119   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:04.913171   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:04.951096   52649 cri.go:89] found id: ""
	I0416 17:33:04.951118   52649 logs.go:276] 0 containers: []
	W0416 17:33:04.951128   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:04.951135   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:04.951198   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:04.988596   52649 cri.go:89] found id: ""
	I0416 17:33:04.988619   52649 logs.go:276] 0 containers: []
	W0416 17:33:04.988628   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:04.988635   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:04.988691   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:05.026166   52649 cri.go:89] found id: ""
	I0416 17:33:05.026190   52649 logs.go:276] 0 containers: []
	W0416 17:33:05.026200   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:05.026206   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:05.026260   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:05.068891   52649 cri.go:89] found id: ""
	I0416 17:33:05.068918   52649 logs.go:276] 0 containers: []
	W0416 17:33:05.068925   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:05.068931   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:05.068979   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:05.111883   52649 cri.go:89] found id: ""
	I0416 17:33:05.111904   52649 logs.go:276] 0 containers: []
	W0416 17:33:05.111914   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:05.111922   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:05.111976   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:05.146305   52649 cri.go:89] found id: ""
	I0416 17:33:05.146332   52649 logs.go:276] 0 containers: []
	W0416 17:33:05.146343   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:05.146350   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:05.146409   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:05.187729   52649 cri.go:89] found id: ""
	I0416 17:33:05.187755   52649 logs.go:276] 0 containers: []
	W0416 17:33:05.187762   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:05.187767   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:05.187822   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:05.223699   52649 cri.go:89] found id: ""
	I0416 17:33:05.223726   52649 logs.go:276] 0 containers: []
	W0416 17:33:05.223737   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:05.223746   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:05.223761   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:05.273833   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:05.273857   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:05.288341   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:05.288364   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:05.359906   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:05.359929   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:05.359942   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:05.438114   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:05.438145   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:07.979852   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:07.996472   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:07.996521   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:08.041643   52649 cri.go:89] found id: ""
	I0416 17:33:08.041662   52649 logs.go:276] 0 containers: []
	W0416 17:33:08.041669   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:08.041675   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:08.041736   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:08.080202   52649 cri.go:89] found id: ""
	I0416 17:33:08.080230   52649 logs.go:276] 0 containers: []
	W0416 17:33:08.080246   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:08.080253   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:08.080313   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:08.119065   52649 cri.go:89] found id: ""
	I0416 17:33:08.119090   52649 logs.go:276] 0 containers: []
	W0416 17:33:08.119101   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:08.119108   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:08.119170   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:08.157047   52649 cri.go:89] found id: ""
	I0416 17:33:08.157073   52649 logs.go:276] 0 containers: []
	W0416 17:33:08.157082   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:08.157089   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:08.157165   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:08.193799   52649 cri.go:89] found id: ""
	I0416 17:33:08.193817   52649 logs.go:276] 0 containers: []
	W0416 17:33:08.193825   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:08.193829   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:08.193882   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:08.248634   52649 cri.go:89] found id: ""
	I0416 17:33:08.248662   52649 logs.go:276] 0 containers: []
	W0416 17:33:08.248674   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:08.248682   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:08.248733   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:08.306904   52649 cri.go:89] found id: ""
	I0416 17:33:08.306931   52649 logs.go:276] 0 containers: []
	W0416 17:33:08.306941   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:08.306948   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:08.307007   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:08.363299   52649 cri.go:89] found id: ""
	I0416 17:33:08.363330   52649 logs.go:276] 0 containers: []
	W0416 17:33:08.363341   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:08.363352   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:08.363368   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:08.416455   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:08.416481   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:08.430405   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:08.430432   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:08.504539   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:08.504561   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:08.504578   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:08.584942   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:08.584971   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:11.127916   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:11.142747   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:11.142812   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:11.182980   52649 cri.go:89] found id: ""
	I0416 17:33:11.183001   52649 logs.go:276] 0 containers: []
	W0416 17:33:11.183008   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:11.183013   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:11.183069   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:11.220490   52649 cri.go:89] found id: ""
	I0416 17:33:11.220523   52649 logs.go:276] 0 containers: []
	W0416 17:33:11.220534   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:11.220541   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:11.220598   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:11.258469   52649 cri.go:89] found id: ""
	I0416 17:33:11.258496   52649 logs.go:276] 0 containers: []
	W0416 17:33:11.258506   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:11.258513   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:11.258577   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:11.295759   52649 cri.go:89] found id: ""
	I0416 17:33:11.295789   52649 logs.go:276] 0 containers: []
	W0416 17:33:11.295799   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:11.295806   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:11.295872   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:11.331560   52649 cri.go:89] found id: ""
	I0416 17:33:11.331584   52649 logs.go:276] 0 containers: []
	W0416 17:33:11.331594   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:11.331600   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:11.331655   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:11.365466   52649 cri.go:89] found id: ""
	I0416 17:33:11.365495   52649 logs.go:276] 0 containers: []
	W0416 17:33:11.365509   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:11.365517   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:11.365574   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:11.399379   52649 cri.go:89] found id: ""
	I0416 17:33:11.399398   52649 logs.go:276] 0 containers: []
	W0416 17:33:11.399405   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:11.399410   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:11.399463   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:11.433188   52649 cri.go:89] found id: ""
	I0416 17:33:11.433213   52649 logs.go:276] 0 containers: []
	W0416 17:33:11.433223   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:11.433232   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:11.433245   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:11.492199   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:11.492228   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:11.507790   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:11.507818   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:11.593437   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:11.593457   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:11.593471   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:11.668601   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:11.668631   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:14.211399   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:14.228632   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:14.228707   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:14.280409   52649 cri.go:89] found id: ""
	I0416 17:33:14.280435   52649 logs.go:276] 0 containers: []
	W0416 17:33:14.280443   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:14.280449   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:14.280505   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:14.325420   52649 cri.go:89] found id: ""
	I0416 17:33:14.325451   52649 logs.go:276] 0 containers: []
	W0416 17:33:14.325460   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:14.325468   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:14.325529   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:14.375948   52649 cri.go:89] found id: ""
	I0416 17:33:14.375979   52649 logs.go:276] 0 containers: []
	W0416 17:33:14.375992   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:14.376000   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:14.376058   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:14.414023   52649 cri.go:89] found id: ""
	I0416 17:33:14.414054   52649 logs.go:276] 0 containers: []
	W0416 17:33:14.414064   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:14.414072   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:14.414134   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:14.453656   52649 cri.go:89] found id: ""
	I0416 17:33:14.453682   52649 logs.go:276] 0 containers: []
	W0416 17:33:14.453692   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:14.453698   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:14.453743   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:14.488778   52649 cri.go:89] found id: ""
	I0416 17:33:14.488807   52649 logs.go:276] 0 containers: []
	W0416 17:33:14.488818   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:14.488852   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:14.488911   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:14.523138   52649 cri.go:89] found id: ""
	I0416 17:33:14.523167   52649 logs.go:276] 0 containers: []
	W0416 17:33:14.523173   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:14.523179   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:14.523224   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:14.558710   52649 cri.go:89] found id: ""
	I0416 17:33:14.558735   52649 logs.go:276] 0 containers: []
	W0416 17:33:14.558742   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:14.558750   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:14.558760   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:14.574555   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:14.574580   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:14.643653   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:14.643671   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:14.643686   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:14.719304   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:14.719333   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:14.759726   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:14.759752   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:17.311977   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:17.325724   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:17.325780   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:17.362932   52649 cri.go:89] found id: ""
	I0416 17:33:17.362963   52649 logs.go:276] 0 containers: []
	W0416 17:33:17.362974   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:17.362982   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:17.363042   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:17.399340   52649 cri.go:89] found id: ""
	I0416 17:33:17.399364   52649 logs.go:276] 0 containers: []
	W0416 17:33:17.399372   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:17.399377   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:17.399420   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:17.438912   52649 cri.go:89] found id: ""
	I0416 17:33:17.438933   52649 logs.go:276] 0 containers: []
	W0416 17:33:17.438941   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:17.438946   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:17.438995   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:17.475369   52649 cri.go:89] found id: ""
	I0416 17:33:17.475401   52649 logs.go:276] 0 containers: []
	W0416 17:33:17.475412   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:17.475420   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:17.475478   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:17.514357   52649 cri.go:89] found id: ""
	I0416 17:33:17.514380   52649 logs.go:276] 0 containers: []
	W0416 17:33:17.514388   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:17.514393   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:17.514437   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:17.549037   52649 cri.go:89] found id: ""
	I0416 17:33:17.549060   52649 logs.go:276] 0 containers: []
	W0416 17:33:17.549068   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:17.549073   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:17.549121   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:17.587240   52649 cri.go:89] found id: ""
	I0416 17:33:17.587272   52649 logs.go:276] 0 containers: []
	W0416 17:33:17.587282   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:17.587290   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:17.587352   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:17.621213   52649 cri.go:89] found id: ""
	I0416 17:33:17.621238   52649 logs.go:276] 0 containers: []
	W0416 17:33:17.621245   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:17.621253   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:17.621264   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:17.672533   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:17.672561   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:17.687766   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:17.687787   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:17.759571   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:17.759592   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:17.759608   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:17.834830   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:17.834860   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:20.377827   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:20.391036   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:20.391103   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:20.427645   52649 cri.go:89] found id: ""
	I0416 17:33:20.427673   52649 logs.go:276] 0 containers: []
	W0416 17:33:20.427685   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:20.427695   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:20.427744   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:20.464153   52649 cri.go:89] found id: ""
	I0416 17:33:20.464179   52649 logs.go:276] 0 containers: []
	W0416 17:33:20.464186   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:20.464192   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:20.464244   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:20.500864   52649 cri.go:89] found id: ""
	I0416 17:33:20.500887   52649 logs.go:276] 0 containers: []
	W0416 17:33:20.500894   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:20.500900   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:20.500959   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:20.538931   52649 cri.go:89] found id: ""
	I0416 17:33:20.538957   52649 logs.go:276] 0 containers: []
	W0416 17:33:20.538964   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:20.538970   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:20.539024   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:20.575275   52649 cri.go:89] found id: ""
	I0416 17:33:20.575308   52649 logs.go:276] 0 containers: []
	W0416 17:33:20.575319   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:20.575326   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:20.575379   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:20.611270   52649 cri.go:89] found id: ""
	I0416 17:33:20.611297   52649 logs.go:276] 0 containers: []
	W0416 17:33:20.611309   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:20.611316   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:20.611376   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:20.651276   52649 cri.go:89] found id: ""
	I0416 17:33:20.651297   52649 logs.go:276] 0 containers: []
	W0416 17:33:20.651305   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:20.651310   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:20.651355   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:20.688493   52649 cri.go:89] found id: ""
	I0416 17:33:20.688520   52649 logs.go:276] 0 containers: []
	W0416 17:33:20.688530   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:20.688541   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:20.688566   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:20.703457   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:20.703486   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:20.777460   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:20.777481   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:20.777501   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:20.857468   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:20.857499   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:20.900542   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:20.900566   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:23.450553   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:23.465457   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:23.465508   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:23.501420   52649 cri.go:89] found id: ""
	I0416 17:33:23.501448   52649 logs.go:276] 0 containers: []
	W0416 17:33:23.501458   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:23.501466   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:23.501513   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:23.537403   52649 cri.go:89] found id: ""
	I0416 17:33:23.537435   52649 logs.go:276] 0 containers: []
	W0416 17:33:23.537445   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:23.537452   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:23.537509   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:23.578136   52649 cri.go:89] found id: ""
	I0416 17:33:23.578161   52649 logs.go:276] 0 containers: []
	W0416 17:33:23.578170   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:23.578177   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:23.578230   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:23.615105   52649 cri.go:89] found id: ""
	I0416 17:33:23.615129   52649 logs.go:276] 0 containers: []
	W0416 17:33:23.615136   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:23.615142   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:23.615191   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:23.651327   52649 cri.go:89] found id: ""
	I0416 17:33:23.651352   52649 logs.go:276] 0 containers: []
	W0416 17:33:23.651362   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:23.651370   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:23.651418   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:23.685677   52649 cri.go:89] found id: ""
	I0416 17:33:23.685706   52649 logs.go:276] 0 containers: []
	W0416 17:33:23.685717   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:23.685724   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:23.685784   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:23.721650   52649 cri.go:89] found id: ""
	I0416 17:33:23.721674   52649 logs.go:276] 0 containers: []
	W0416 17:33:23.721681   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:23.721686   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:23.721738   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:23.759271   52649 cri.go:89] found id: ""
	I0416 17:33:23.759300   52649 logs.go:276] 0 containers: []
	W0416 17:33:23.759307   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:23.759365   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:23.759378   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:23.806950   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:23.806976   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:23.821357   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:23.821380   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:23.894551   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:23.894569   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:23.894580   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:23.973259   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:23.973291   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:26.516218   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:26.529749   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:26.529808   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:26.572756   52649 cri.go:89] found id: ""
	I0416 17:33:26.572791   52649 logs.go:276] 0 containers: []
	W0416 17:33:26.572856   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:26.572894   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:26.572970   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:26.610782   52649 cri.go:89] found id: ""
	I0416 17:33:26.610813   52649 logs.go:276] 0 containers: []
	W0416 17:33:26.610824   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:26.610832   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:26.610904   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:26.647738   52649 cri.go:89] found id: ""
	I0416 17:33:26.647765   52649 logs.go:276] 0 containers: []
	W0416 17:33:26.647775   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:26.647781   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:26.647826   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:26.683390   52649 cri.go:89] found id: ""
	I0416 17:33:26.683413   52649 logs.go:276] 0 containers: []
	W0416 17:33:26.683420   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:26.683425   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:26.683501   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:26.720682   52649 cri.go:89] found id: ""
	I0416 17:33:26.720721   52649 logs.go:276] 0 containers: []
	W0416 17:33:26.720732   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:26.720739   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:26.720807   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:26.756552   52649 cri.go:89] found id: ""
	I0416 17:33:26.756584   52649 logs.go:276] 0 containers: []
	W0416 17:33:26.756595   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:26.756604   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:26.756677   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:26.792606   52649 cri.go:89] found id: ""
	I0416 17:33:26.792635   52649 logs.go:276] 0 containers: []
	W0416 17:33:26.792646   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:26.792652   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:26.792698   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:26.828024   52649 cri.go:89] found id: ""
	I0416 17:33:26.828052   52649 logs.go:276] 0 containers: []
	W0416 17:33:26.828063   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:26.828074   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:26.828089   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:26.911948   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:26.911982   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:26.959363   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:26.959400   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:27.012799   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:27.012825   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:27.027654   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:27.027681   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:27.096181   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
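
For context on the recurring "connection to the server localhost:8443 was refused" error above: the describe-nodes step runs kubectl on the node against /var/lib/minikube/kubeconfig, which points at localhost:8443, and nothing is listening there because no kube-apiserver container or process has started. A quick manual confirmation, sketched under the assumption that curl is available on the node (the pgrep probe is the same one minikube logs above):

  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
  curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"
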
	I0416 17:33:29.596580   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:29.610694   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:29.610763   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:29.651901   52649 cri.go:89] found id: ""
	I0416 17:33:29.651929   52649 logs.go:276] 0 containers: []
	W0416 17:33:29.651941   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:29.651948   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:29.652012   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:29.689305   52649 cri.go:89] found id: ""
	I0416 17:33:29.689344   52649 logs.go:276] 0 containers: []
	W0416 17:33:29.689355   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:29.689362   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:29.689420   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:29.726311   52649 cri.go:89] found id: ""
	I0416 17:33:29.726344   52649 logs.go:276] 0 containers: []
	W0416 17:33:29.726354   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:29.726361   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:29.726422   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:29.764714   52649 cri.go:89] found id: ""
	I0416 17:33:29.764739   52649 logs.go:276] 0 containers: []
	W0416 17:33:29.764746   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:29.764751   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:29.764797   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:29.799490   52649 cri.go:89] found id: ""
	I0416 17:33:29.799520   52649 logs.go:276] 0 containers: []
	W0416 17:33:29.799530   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:29.799537   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:29.799603   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:29.835850   52649 cri.go:89] found id: ""
	I0416 17:33:29.835871   52649 logs.go:276] 0 containers: []
	W0416 17:33:29.835879   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:29.835883   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:29.835925   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:29.870921   52649 cri.go:89] found id: ""
	I0416 17:33:29.870946   52649 logs.go:276] 0 containers: []
	W0416 17:33:29.870956   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:29.870963   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:29.871006   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:29.912766   52649 cri.go:89] found id: ""
	I0416 17:33:29.912794   52649 logs.go:276] 0 containers: []
	W0416 17:33:29.912806   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:29.912818   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:29.912844   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:29.927529   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:29.927554   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:29.998694   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:29.998714   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:29.998729   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:30.078731   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:30.078765   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:30.121625   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:30.121648   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:32.674484   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:32.688703   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:32.688782   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:32.724978   52649 cri.go:89] found id: ""
	I0416 17:33:32.725007   52649 logs.go:276] 0 containers: []
	W0416 17:33:32.725014   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:32.725020   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:32.725062   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:32.760346   52649 cri.go:89] found id: ""
	I0416 17:33:32.760369   52649 logs.go:276] 0 containers: []
	W0416 17:33:32.760379   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:32.760386   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:32.760442   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:32.796278   52649 cri.go:89] found id: ""
	I0416 17:33:32.796306   52649 logs.go:276] 0 containers: []
	W0416 17:33:32.796314   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:32.796319   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:32.796361   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:32.830078   52649 cri.go:89] found id: ""
	I0416 17:33:32.830109   52649 logs.go:276] 0 containers: []
	W0416 17:33:32.830121   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:32.830128   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:32.830175   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:32.864656   52649 cri.go:89] found id: ""
	I0416 17:33:32.864685   52649 logs.go:276] 0 containers: []
	W0416 17:33:32.864695   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:32.864702   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:32.864773   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:32.898849   52649 cri.go:89] found id: ""
	I0416 17:33:32.898875   52649 logs.go:276] 0 containers: []
	W0416 17:33:32.898884   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:32.898889   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:32.898933   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:32.937760   52649 cri.go:89] found id: ""
	I0416 17:33:32.937784   52649 logs.go:276] 0 containers: []
	W0416 17:33:32.937791   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:32.937796   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:32.937845   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:32.973180   52649 cri.go:89] found id: ""
	I0416 17:33:32.973208   52649 logs.go:276] 0 containers: []
	W0416 17:33:32.973219   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:32.973230   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:32.973243   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:33.025766   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:33.025794   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:33.040971   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:33.040999   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:33.111487   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:33.111513   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:33.111529   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:33.200877   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:33.200908   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:35.742316   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:35.757977   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:35.758048   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:35.793228   52649 cri.go:89] found id: ""
	I0416 17:33:35.793254   52649 logs.go:276] 0 containers: []
	W0416 17:33:35.793278   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:35.793286   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:35.793347   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:35.829467   52649 cri.go:89] found id: ""
	I0416 17:33:35.829498   52649 logs.go:276] 0 containers: []
	W0416 17:33:35.829508   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:35.829515   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:35.829565   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:35.864906   52649 cri.go:89] found id: ""
	I0416 17:33:35.864933   52649 logs.go:276] 0 containers: []
	W0416 17:33:35.864943   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:35.864949   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:35.865016   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:35.903584   52649 cri.go:89] found id: ""
	I0416 17:33:35.903608   52649 logs.go:276] 0 containers: []
	W0416 17:33:35.903617   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:35.903624   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:35.903685   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:35.939919   52649 cri.go:89] found id: ""
	I0416 17:33:35.939945   52649 logs.go:276] 0 containers: []
	W0416 17:33:35.939952   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:35.939957   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:35.940002   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:35.974739   52649 cri.go:89] found id: ""
	I0416 17:33:35.974768   52649 logs.go:276] 0 containers: []
	W0416 17:33:35.974777   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:35.974782   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:35.974841   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:36.010685   52649 cri.go:89] found id: ""
	I0416 17:33:36.010711   52649 logs.go:276] 0 containers: []
	W0416 17:33:36.010722   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:36.010730   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:36.010790   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:36.048296   52649 cri.go:89] found id: ""
	I0416 17:33:36.048322   52649 logs.go:276] 0 containers: []
	W0416 17:33:36.048332   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:36.048342   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:36.048363   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:36.116207   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:36.116223   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:36.116235   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:36.195721   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:36.195746   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:36.238380   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:36.238412   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:36.288352   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:36.288377   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:38.803567   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:38.816932   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:38.816998   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:38.855062   52649 cri.go:89] found id: ""
	I0416 17:33:38.855087   52649 logs.go:276] 0 containers: []
	W0416 17:33:38.855098   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:38.855105   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:38.855164   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:38.904933   52649 cri.go:89] found id: ""
	I0416 17:33:38.904962   52649 logs.go:276] 0 containers: []
	W0416 17:33:38.904973   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:38.904986   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:38.905044   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:38.948771   52649 cri.go:89] found id: ""
	I0416 17:33:38.948807   52649 logs.go:276] 0 containers: []
	W0416 17:33:38.948818   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:38.948827   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:38.948906   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:38.985352   52649 cri.go:89] found id: ""
	I0416 17:33:38.985374   52649 logs.go:276] 0 containers: []
	W0416 17:33:38.985381   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:38.985387   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:38.985429   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:39.022671   52649 cri.go:89] found id: ""
	I0416 17:33:39.022700   52649 logs.go:276] 0 containers: []
	W0416 17:33:39.022708   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:39.022714   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:39.022821   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:39.061671   52649 cri.go:89] found id: ""
	I0416 17:33:39.061698   52649 logs.go:276] 0 containers: []
	W0416 17:33:39.061709   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:39.061717   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:39.061785   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:39.100282   52649 cri.go:89] found id: ""
	I0416 17:33:39.100307   52649 logs.go:276] 0 containers: []
	W0416 17:33:39.100318   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:39.100326   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:39.100379   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:39.139212   52649 cri.go:89] found id: ""
	I0416 17:33:39.139246   52649 logs.go:276] 0 containers: []
	W0416 17:33:39.139256   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:39.139266   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:39.139279   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:39.193748   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:39.193772   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:39.212486   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:39.212515   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:39.299033   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:39.299059   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:39.299076   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:39.391341   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:39.391373   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:41.936952   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:41.953829   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:41.953907   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:42.002048   52649 cri.go:89] found id: ""
	I0416 17:33:42.002077   52649 logs.go:276] 0 containers: []
	W0416 17:33:42.002088   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:42.002095   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:42.002161   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:42.046001   52649 cri.go:89] found id: ""
	I0416 17:33:42.046031   52649 logs.go:276] 0 containers: []
	W0416 17:33:42.046042   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:42.046049   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:42.046113   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:42.081667   52649 cri.go:89] found id: ""
	I0416 17:33:42.081698   52649 logs.go:276] 0 containers: []
	W0416 17:33:42.081707   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:42.081712   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:42.081771   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:42.126077   52649 cri.go:89] found id: ""
	I0416 17:33:42.126106   52649 logs.go:276] 0 containers: []
	W0416 17:33:42.126117   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:42.126125   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:42.126182   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:42.161543   52649 cri.go:89] found id: ""
	I0416 17:33:42.161566   52649 logs.go:276] 0 containers: []
	W0416 17:33:42.161574   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:42.161579   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:42.161630   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:42.202653   52649 cri.go:89] found id: ""
	I0416 17:33:42.202683   52649 logs.go:276] 0 containers: []
	W0416 17:33:42.202695   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:42.202703   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:42.202769   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:42.240019   52649 cri.go:89] found id: ""
	I0416 17:33:42.240045   52649 logs.go:276] 0 containers: []
	W0416 17:33:42.240052   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:42.240057   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:42.240113   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:42.277336   52649 cri.go:89] found id: ""
	I0416 17:33:42.277355   52649 logs.go:276] 0 containers: []
	W0416 17:33:42.277363   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:42.277371   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:42.277381   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:42.330875   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:42.330908   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:42.345829   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:42.345852   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:42.419802   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:42.419820   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:42.419831   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:42.501026   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:42.501066   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:45.045384   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:45.059821   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:45.059893   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:45.096829   52649 cri.go:89] found id: ""
	I0416 17:33:45.096872   52649 logs.go:276] 0 containers: []
	W0416 17:33:45.096883   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:45.096890   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:45.096956   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:45.132813   52649 cri.go:89] found id: ""
	I0416 17:33:45.132849   52649 logs.go:276] 0 containers: []
	W0416 17:33:45.132860   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:45.132867   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:45.132930   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:45.177280   52649 cri.go:89] found id: ""
	I0416 17:33:45.177311   52649 logs.go:276] 0 containers: []
	W0416 17:33:45.177322   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:45.177329   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:45.177392   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:45.214072   52649 cri.go:89] found id: ""
	I0416 17:33:45.214098   52649 logs.go:276] 0 containers: []
	W0416 17:33:45.214109   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:45.214116   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:45.214173   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:45.252272   52649 cri.go:89] found id: ""
	I0416 17:33:45.252297   52649 logs.go:276] 0 containers: []
	W0416 17:33:45.252305   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:45.252311   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:45.252359   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:45.292483   52649 cri.go:89] found id: ""
	I0416 17:33:45.292509   52649 logs.go:276] 0 containers: []
	W0416 17:33:45.292516   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:45.292521   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:45.292572   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:45.330266   52649 cri.go:89] found id: ""
	I0416 17:33:45.330297   52649 logs.go:276] 0 containers: []
	W0416 17:33:45.330307   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:45.330314   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:45.330374   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:45.366836   52649 cri.go:89] found id: ""
	I0416 17:33:45.366863   52649 logs.go:276] 0 containers: []
	W0416 17:33:45.366874   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:45.366885   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:45.366900   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:45.442736   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:45.442760   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:45.442772   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:45.519262   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:45.519295   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:45.558584   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:45.558607   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:45.612988   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:45.613017   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:48.127554   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:48.141962   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:48.142016   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:48.180267   52649 cri.go:89] found id: ""
	I0416 17:33:48.180287   52649 logs.go:276] 0 containers: []
	W0416 17:33:48.180295   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:48.180300   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:48.180352   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:48.214629   52649 cri.go:89] found id: ""
	I0416 17:33:48.214648   52649 logs.go:276] 0 containers: []
	W0416 17:33:48.214653   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:48.214658   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:48.214708   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:48.253518   52649 cri.go:89] found id: ""
	I0416 17:33:48.253544   52649 logs.go:276] 0 containers: []
	W0416 17:33:48.253555   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:48.253562   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:48.253611   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:48.286839   52649 cri.go:89] found id: ""
	I0416 17:33:48.286860   52649 logs.go:276] 0 containers: []
	W0416 17:33:48.286868   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:48.286873   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:48.286915   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:48.322404   52649 cri.go:89] found id: ""
	I0416 17:33:48.322426   52649 logs.go:276] 0 containers: []
	W0416 17:33:48.322433   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:48.322438   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:48.322481   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:48.381175   52649 cri.go:89] found id: ""
	I0416 17:33:48.381197   52649 logs.go:276] 0 containers: []
	W0416 17:33:48.381203   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:48.381208   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:48.381254   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:48.421562   52649 cri.go:89] found id: ""
	I0416 17:33:48.421584   52649 logs.go:276] 0 containers: []
	W0416 17:33:48.421594   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:48.421601   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:48.421656   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:48.458803   52649 cri.go:89] found id: ""
	I0416 17:33:48.458832   52649 logs.go:276] 0 containers: []
	W0416 17:33:48.458840   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:48.458850   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:48.458865   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:48.473056   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:48.473078   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:48.542768   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:48.542794   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:48.542806   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:48.617557   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:48.617584   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:48.658190   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:48.658213   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
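The cycle above (probe every expected control-plane container with crictl, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status logs) repeats for the rest of this section at roughly three-second intervals while the apiserver stays unreachable. For readers who want to reproduce the probe half of that loop by hand, a minimal Go sketch is below. It assumes it runs directly on the minikube node with sudo and crictl available, rather than going through minikube's ssh_runner, so it only approximates the commands visible in the log.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Control-plane components the log probes on every round.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Same listing the log runs: all containers (any state) whose name matches.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe for %s failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
	}

	// Log gathering that follows each probe round in the log above.
	for _, unit := range []string{"kubelet", "crio"} {
		out, _ := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		fmt.Printf("--- journalctl -u %s (last 400 lines) ---\n%s\n", unit, out)
	}
}
```

In this run every probe returns an empty ID list, which is why each round in the log ends with the same "No container was found matching ..." warnings and a failed describe-nodes call against localhost:8443.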
	I0416 17:33:51.211856   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:51.226435   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:51.226503   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:51.265608   52649 cri.go:89] found id: ""
	I0416 17:33:51.265639   52649 logs.go:276] 0 containers: []
	W0416 17:33:51.265650   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:51.265657   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:51.265716   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:51.307826   52649 cri.go:89] found id: ""
	I0416 17:33:51.307846   52649 logs.go:276] 0 containers: []
	W0416 17:33:51.307854   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:51.307859   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:51.307913   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:51.355135   52649 cri.go:89] found id: ""
	I0416 17:33:51.355156   52649 logs.go:276] 0 containers: []
	W0416 17:33:51.355164   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:51.355170   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:51.355223   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:51.399942   52649 cri.go:89] found id: ""
	I0416 17:33:51.399963   52649 logs.go:276] 0 containers: []
	W0416 17:33:51.399974   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:51.399991   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:51.400038   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:51.450456   52649 cri.go:89] found id: ""
	I0416 17:33:51.450479   52649 logs.go:276] 0 containers: []
	W0416 17:33:51.450488   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:51.450495   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:51.450543   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:51.487969   52649 cri.go:89] found id: ""
	I0416 17:33:51.488004   52649 logs.go:276] 0 containers: []
	W0416 17:33:51.488016   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:51.488024   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:51.488081   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:51.525066   52649 cri.go:89] found id: ""
	I0416 17:33:51.525093   52649 logs.go:276] 0 containers: []
	W0416 17:33:51.525101   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:51.525106   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:51.525154   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:51.561313   52649 cri.go:89] found id: ""
	I0416 17:33:51.561342   52649 logs.go:276] 0 containers: []
	W0416 17:33:51.561353   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:51.561363   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:51.561376   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:51.638973   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:51.639002   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:51.684346   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:51.684369   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:51.736195   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:51.736225   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:51.750330   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:51.750352   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:51.827200   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:54.327437   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:54.341640   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:54.341714   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:54.378609   52649 cri.go:89] found id: ""
	I0416 17:33:54.378642   52649 logs.go:276] 0 containers: []
	W0416 17:33:54.378652   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:54.378660   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:54.378717   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:54.413465   52649 cri.go:89] found id: ""
	I0416 17:33:54.413488   52649 logs.go:276] 0 containers: []
	W0416 17:33:54.413498   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:54.413505   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:54.413563   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:54.449167   52649 cri.go:89] found id: ""
	I0416 17:33:54.449194   52649 logs.go:276] 0 containers: []
	W0416 17:33:54.449207   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:54.449213   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:54.449274   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:54.486375   52649 cri.go:89] found id: ""
	I0416 17:33:54.486404   52649 logs.go:276] 0 containers: []
	W0416 17:33:54.486415   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:54.486422   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:54.486479   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:54.524270   52649 cri.go:89] found id: ""
	I0416 17:33:54.524295   52649 logs.go:276] 0 containers: []
	W0416 17:33:54.524306   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:54.524320   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:54.524378   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:54.563548   52649 cri.go:89] found id: ""
	I0416 17:33:54.563574   52649 logs.go:276] 0 containers: []
	W0416 17:33:54.563584   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:54.563591   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:54.563654   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:54.602237   52649 cri.go:89] found id: ""
	I0416 17:33:54.602268   52649 logs.go:276] 0 containers: []
	W0416 17:33:54.602281   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:54.602290   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:54.602354   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:54.637682   52649 cri.go:89] found id: ""
	I0416 17:33:54.637710   52649 logs.go:276] 0 containers: []
	W0416 17:33:54.637722   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:54.637730   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:54.637741   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:54.691197   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:54.691229   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:54.707744   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:54.707775   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:54.813748   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:54.813783   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:54.813798   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:33:54.905475   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:54.905503   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:57.453335   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:33:57.468885   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:33:57.468952   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:33:57.507715   52649 cri.go:89] found id: ""
	I0416 17:33:57.507740   52649 logs.go:276] 0 containers: []
	W0416 17:33:57.507752   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:33:57.507763   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:33:57.507818   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:33:57.543520   52649 cri.go:89] found id: ""
	I0416 17:33:57.543548   52649 logs.go:276] 0 containers: []
	W0416 17:33:57.543557   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:33:57.543563   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:33:57.543617   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:33:57.581232   52649 cri.go:89] found id: ""
	I0416 17:33:57.581261   52649 logs.go:276] 0 containers: []
	W0416 17:33:57.581272   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:33:57.581280   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:33:57.581335   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:33:57.619194   52649 cri.go:89] found id: ""
	I0416 17:33:57.619226   52649 logs.go:276] 0 containers: []
	W0416 17:33:57.619235   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:33:57.619242   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:33:57.619302   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:33:57.655036   52649 cri.go:89] found id: ""
	I0416 17:33:57.655065   52649 logs.go:276] 0 containers: []
	W0416 17:33:57.655075   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:33:57.655081   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:33:57.655128   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:33:57.692433   52649 cri.go:89] found id: ""
	I0416 17:33:57.692459   52649 logs.go:276] 0 containers: []
	W0416 17:33:57.692470   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:33:57.692477   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:33:57.692535   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:33:57.731891   52649 cri.go:89] found id: ""
	I0416 17:33:57.731913   52649 logs.go:276] 0 containers: []
	W0416 17:33:57.731920   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:33:57.731925   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:33:57.731978   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:33:57.771843   52649 cri.go:89] found id: ""
	I0416 17:33:57.771872   52649 logs.go:276] 0 containers: []
	W0416 17:33:57.771879   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:33:57.771887   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:33:57.771899   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:33:57.816899   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:33:57.816932   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:33:57.870705   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:33:57.870734   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:33:57.886284   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:33:57.886311   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:33:57.955439   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:33:57.955460   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:33:57.955471   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:34:00.535255   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:34:00.566413   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:34:00.566478   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:34:00.616197   52649 cri.go:89] found id: ""
	I0416 17:34:00.616231   52649 logs.go:276] 0 containers: []
	W0416 17:34:00.616243   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:34:00.616250   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:34:00.616309   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:34:00.652399   52649 cri.go:89] found id: ""
	I0416 17:34:00.652423   52649 logs.go:276] 0 containers: []
	W0416 17:34:00.652432   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:34:00.652440   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:34:00.652495   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:34:00.692574   52649 cri.go:89] found id: ""
	I0416 17:34:00.692602   52649 logs.go:276] 0 containers: []
	W0416 17:34:00.692613   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:34:00.692620   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:34:00.692699   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:34:00.729167   52649 cri.go:89] found id: ""
	I0416 17:34:00.729195   52649 logs.go:276] 0 containers: []
	W0416 17:34:00.729206   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:34:00.729214   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:34:00.729270   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:34:00.770250   52649 cri.go:89] found id: ""
	I0416 17:34:00.770277   52649 logs.go:276] 0 containers: []
	W0416 17:34:00.770285   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:34:00.770290   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:34:00.770337   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:34:00.806025   52649 cri.go:89] found id: ""
	I0416 17:34:00.806059   52649 logs.go:276] 0 containers: []
	W0416 17:34:00.806070   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:34:00.806077   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:34:00.806135   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:34:00.843248   52649 cri.go:89] found id: ""
	I0416 17:34:00.843281   52649 logs.go:276] 0 containers: []
	W0416 17:34:00.843290   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:34:00.843296   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:34:00.843340   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:34:00.878904   52649 cri.go:89] found id: ""
	I0416 17:34:00.878932   52649 logs.go:276] 0 containers: []
	W0416 17:34:00.878946   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:34:00.878956   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:34:00.878971   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:34:00.927748   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:34:00.927781   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:34:00.942579   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:34:00.942602   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:34:01.013583   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:34:01.013607   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:34:01.013621   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:34:01.088188   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:34:01.088218   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:34:03.631067   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:34:03.644949   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:34:03.645006   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:34:03.684823   52649 cri.go:89] found id: ""
	I0416 17:34:03.684861   52649 logs.go:276] 0 containers: []
	W0416 17:34:03.684882   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:34:03.684888   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:34:03.684933   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:34:03.721107   52649 cri.go:89] found id: ""
	I0416 17:34:03.721132   52649 logs.go:276] 0 containers: []
	W0416 17:34:03.721142   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:34:03.721148   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:34:03.721195   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:34:03.757414   52649 cri.go:89] found id: ""
	I0416 17:34:03.757444   52649 logs.go:276] 0 containers: []
	W0416 17:34:03.757453   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:34:03.757458   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:34:03.757503   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:34:03.795475   52649 cri.go:89] found id: ""
	I0416 17:34:03.795498   52649 logs.go:276] 0 containers: []
	W0416 17:34:03.795509   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:34:03.795515   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:34:03.795574   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:34:03.829510   52649 cri.go:89] found id: ""
	I0416 17:34:03.829533   52649 logs.go:276] 0 containers: []
	W0416 17:34:03.829540   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:34:03.829545   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:34:03.829586   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:34:03.864585   52649 cri.go:89] found id: ""
	I0416 17:34:03.864613   52649 logs.go:276] 0 containers: []
	W0416 17:34:03.864623   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:34:03.864629   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:34:03.864670   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:34:03.903691   52649 cri.go:89] found id: ""
	I0416 17:34:03.903715   52649 logs.go:276] 0 containers: []
	W0416 17:34:03.903721   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:34:03.903726   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:34:03.903772   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:34:03.942005   52649 cri.go:89] found id: ""
	I0416 17:34:03.942030   52649 logs.go:276] 0 containers: []
	W0416 17:34:03.942038   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:34:03.942045   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:34:03.942056   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:34:03.956333   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:34:03.956355   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:34:04.026694   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:34:04.026712   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:34:04.026724   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:34:04.102890   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:34:04.102921   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:34:04.143697   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:34:04.143722   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:34:06.696659   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:34:06.711415   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:34:06.711464   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:34:06.750696   52649 cri.go:89] found id: ""
	I0416 17:34:06.750739   52649 logs.go:276] 0 containers: []
	W0416 17:34:06.750751   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:34:06.750759   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:34:06.750816   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:34:06.788284   52649 cri.go:89] found id: ""
	I0416 17:34:06.788312   52649 logs.go:276] 0 containers: []
	W0416 17:34:06.788324   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:34:06.788331   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:34:06.788383   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:34:06.825878   52649 cri.go:89] found id: ""
	I0416 17:34:06.825906   52649 logs.go:276] 0 containers: []
	W0416 17:34:06.825918   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:34:06.825926   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:34:06.825990   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:34:06.865132   52649 cri.go:89] found id: ""
	I0416 17:34:06.865157   52649 logs.go:276] 0 containers: []
	W0416 17:34:06.865168   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:34:06.865175   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:34:06.865239   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:34:06.902237   52649 cri.go:89] found id: ""
	I0416 17:34:06.902262   52649 logs.go:276] 0 containers: []
	W0416 17:34:06.902272   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:34:06.902279   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:34:06.902340   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:34:06.940363   52649 cri.go:89] found id: ""
	I0416 17:34:06.940391   52649 logs.go:276] 0 containers: []
	W0416 17:34:06.940402   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:34:06.940410   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:34:06.940468   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:34:06.976638   52649 cri.go:89] found id: ""
	I0416 17:34:06.976666   52649 logs.go:276] 0 containers: []
	W0416 17:34:06.976676   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:34:06.976682   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:34:06.976729   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:34:07.016805   52649 cri.go:89] found id: ""
	I0416 17:34:07.016825   52649 logs.go:276] 0 containers: []
	W0416 17:34:07.016845   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:34:07.016852   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:34:07.016866   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:34:07.068619   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:34:07.068650   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:34:07.083444   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:34:07.083471   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:34:07.160935   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:34:07.160954   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:34:07.160969   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:34:07.245839   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:34:07.245869   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:34:09.785426   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:34:09.800199   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:34:09.800270   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:34:09.840863   52649 cri.go:89] found id: ""
	I0416 17:34:09.840889   52649 logs.go:276] 0 containers: []
	W0416 17:34:09.840899   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:34:09.840905   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:34:09.840960   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:34:09.878797   52649 cri.go:89] found id: ""
	I0416 17:34:09.878829   52649 logs.go:276] 0 containers: []
	W0416 17:34:09.878837   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:34:09.878842   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:34:09.878898   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:34:09.914893   52649 cri.go:89] found id: ""
	I0416 17:34:09.914915   52649 logs.go:276] 0 containers: []
	W0416 17:34:09.914924   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:34:09.914931   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:34:09.914990   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:34:09.949509   52649 cri.go:89] found id: ""
	I0416 17:34:09.949531   52649 logs.go:276] 0 containers: []
	W0416 17:34:09.949540   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:34:09.949547   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:34:09.949600   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:34:09.990234   52649 cri.go:89] found id: ""
	I0416 17:34:09.990254   52649 logs.go:276] 0 containers: []
	W0416 17:34:09.990264   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:34:09.990270   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:34:09.990322   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:34:10.026557   52649 cri.go:89] found id: ""
	I0416 17:34:10.026585   52649 logs.go:276] 0 containers: []
	W0416 17:34:10.026594   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:34:10.026599   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:34:10.026659   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:34:10.062057   52649 cri.go:89] found id: ""
	I0416 17:34:10.062080   52649 logs.go:276] 0 containers: []
	W0416 17:34:10.062087   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:34:10.062092   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:34:10.062142   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:34:10.098572   52649 cri.go:89] found id: ""
	I0416 17:34:10.098595   52649 logs.go:276] 0 containers: []
	W0416 17:34:10.098602   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:34:10.098610   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:34:10.098624   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:34:10.162253   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:34:10.162284   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:34:10.176891   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:34:10.176913   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:34:10.247979   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:34:10.248000   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:34:10.248011   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:34:10.324024   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:34:10.324055   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:34:12.868680   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:34:12.883229   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:34:12.883278   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:34:12.924178   52649 cri.go:89] found id: ""
	I0416 17:34:12.924200   52649 logs.go:276] 0 containers: []
	W0416 17:34:12.924208   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:34:12.924214   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:34:12.924260   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:34:12.961227   52649 cri.go:89] found id: ""
	I0416 17:34:12.961252   52649 logs.go:276] 0 containers: []
	W0416 17:34:12.961260   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:34:12.961266   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:34:12.961318   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:34:12.996421   52649 cri.go:89] found id: ""
	I0416 17:34:12.996441   52649 logs.go:276] 0 containers: []
	W0416 17:34:12.996449   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:34:12.996454   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:34:12.996508   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:34:13.034208   52649 cri.go:89] found id: ""
	I0416 17:34:13.034229   52649 logs.go:276] 0 containers: []
	W0416 17:34:13.034240   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:34:13.034245   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:34:13.034287   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:34:13.072854   52649 cri.go:89] found id: ""
	I0416 17:34:13.072878   52649 logs.go:276] 0 containers: []
	W0416 17:34:13.072886   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:34:13.072891   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:34:13.072949   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:34:13.107390   52649 cri.go:89] found id: ""
	I0416 17:34:13.107414   52649 logs.go:276] 0 containers: []
	W0416 17:34:13.107424   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:34:13.107431   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:34:13.107476   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:34:13.143650   52649 cri.go:89] found id: ""
	I0416 17:34:13.143670   52649 logs.go:276] 0 containers: []
	W0416 17:34:13.143680   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:34:13.143685   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:34:13.143728   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:34:13.183017   52649 cri.go:89] found id: ""
	I0416 17:34:13.183038   52649 logs.go:276] 0 containers: []
	W0416 17:34:13.183045   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:34:13.183052   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:34:13.183067   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:34:13.237118   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:34:13.237146   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:34:13.251705   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:34:13.251731   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:34:13.323318   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:34:13.323333   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:34:13.323345   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:34:13.400593   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:34:13.400628   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:34:15.948207   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:34:15.964247   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:34:15.964325   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:34:16.000205   52649 cri.go:89] found id: ""
	I0416 17:34:16.000234   52649 logs.go:276] 0 containers: []
	W0416 17:34:16.000246   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:34:16.000256   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:34:16.000338   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:34:16.036162   52649 cri.go:89] found id: ""
	I0416 17:34:16.036187   52649 logs.go:276] 0 containers: []
	W0416 17:34:16.036194   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:34:16.036200   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:34:16.036249   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:34:16.073649   52649 cri.go:89] found id: ""
	I0416 17:34:16.073670   52649 logs.go:276] 0 containers: []
	W0416 17:34:16.073680   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:34:16.073686   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:34:16.073729   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:34:16.110183   52649 cri.go:89] found id: ""
	I0416 17:34:16.110210   52649 logs.go:276] 0 containers: []
	W0416 17:34:16.110220   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:34:16.110227   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:34:16.110280   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:34:16.146816   52649 cri.go:89] found id: ""
	I0416 17:34:16.146836   52649 logs.go:276] 0 containers: []
	W0416 17:34:16.146843   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:34:16.146848   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:34:16.146902   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:34:16.186359   52649 cri.go:89] found id: ""
	I0416 17:34:16.186385   52649 logs.go:276] 0 containers: []
	W0416 17:34:16.186395   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:34:16.186402   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:34:16.186448   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:34:16.223616   52649 cri.go:89] found id: ""
	I0416 17:34:16.223636   52649 logs.go:276] 0 containers: []
	W0416 17:34:16.223643   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:34:16.223648   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:34:16.223694   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:34:16.260603   52649 cri.go:89] found id: ""
	I0416 17:34:16.260632   52649 logs.go:276] 0 containers: []
	W0416 17:34:16.260641   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:34:16.260652   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:34:16.260668   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:34:16.340620   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:34:16.340638   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:34:16.340650   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:34:16.417702   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:34:16.417734   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:34:16.457877   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:34:16.457913   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:34:16.510820   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:34:16.510847   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:34:19.027607   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:34:19.042232   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:34:19.042298   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:34:19.083817   52649 cri.go:89] found id: ""
	I0416 17:34:19.083853   52649 logs.go:276] 0 containers: []
	W0416 17:34:19.083863   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:34:19.083870   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:34:19.083927   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:34:19.121992   52649 cri.go:89] found id: ""
	I0416 17:34:19.122020   52649 logs.go:276] 0 containers: []
	W0416 17:34:19.122030   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:34:19.122037   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:34:19.122100   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:34:19.160677   52649 cri.go:89] found id: ""
	I0416 17:34:19.160699   52649 logs.go:276] 0 containers: []
	W0416 17:34:19.160709   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:34:19.160715   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:34:19.160772   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:34:19.196963   52649 cri.go:89] found id: ""
	I0416 17:34:19.196990   52649 logs.go:276] 0 containers: []
	W0416 17:34:19.197003   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:34:19.197008   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:34:19.197051   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:34:19.234731   52649 cri.go:89] found id: ""
	I0416 17:34:19.234756   52649 logs.go:276] 0 containers: []
	W0416 17:34:19.234764   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:34:19.234772   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:34:19.234833   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:34:19.271883   52649 cri.go:89] found id: ""
	I0416 17:34:19.271920   52649 logs.go:276] 0 containers: []
	W0416 17:34:19.271932   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:34:19.271940   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:34:19.271993   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:34:19.308784   52649 cri.go:89] found id: ""
	I0416 17:34:19.308814   52649 logs.go:276] 0 containers: []
	W0416 17:34:19.308833   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:34:19.308851   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:34:19.308911   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:34:19.344001   52649 cri.go:89] found id: ""
	I0416 17:34:19.344031   52649 logs.go:276] 0 containers: []
	W0416 17:34:19.344042   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:34:19.344052   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:34:19.344064   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:34:19.399496   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:34:19.399526   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:34:19.413478   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:34:19.413502   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:34:19.485331   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:34:19.485352   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:34:19.485367   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:34:19.567478   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:34:19.567507   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 17:34:22.111175   52649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:34:22.126384   52649 kubeadm.go:591] duration metric: took 4m4.163348118s to restartPrimaryControlPlane
	W0416 17:34:22.126457   52649 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 17:34:22.126483   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 17:34:23.121469   52649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:34:23.137054   52649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:34:23.148681   52649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:34:23.159776   52649 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:34:23.159793   52649 kubeadm.go:156] found existing configuration files:
	
	I0416 17:34:23.159825   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:34:23.170036   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:34:23.170083   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:34:23.180959   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:34:23.191366   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:34:23.191456   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:34:23.202408   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:34:23.212717   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:34:23.212757   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:34:23.223355   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:34:23.233713   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:34:23.233750   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
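The stale-config cleanup shown above reduces to a short loop: check each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete the file when the check fails. A minimal shell sketch of that sequence, using only the paths and commands visible in the log lines above (run on the node itself, not the host; this is an illustration, not the exact minikube implementation):

  # For each kubeconfig, keep it only if it already points at the in-cluster endpoint.
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done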
	I0416 17:34:23.244339   52649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:34:23.329369   52649 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 17:34:23.329431   52649 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:34:23.482646   52649 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:34:23.482794   52649 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:34:23.482922   52649 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:34:23.673466   52649 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:34:23.675612   52649 out.go:204]   - Generating certificates and keys ...
	I0416 17:34:23.675707   52649 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:34:23.675789   52649 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:34:23.675905   52649 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 17:34:23.675989   52649 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 17:34:23.676075   52649 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 17:34:23.676164   52649 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 17:34:23.676251   52649 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 17:34:23.676358   52649 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 17:34:23.676489   52649 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 17:34:23.676679   52649 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 17:34:23.676744   52649 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 17:34:23.676832   52649 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:34:23.748516   52649 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:34:23.914646   52649 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:34:24.051613   52649 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:34:24.277019   52649 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:34:24.292320   52649 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:34:24.293094   52649 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:34:24.293203   52649 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:34:24.442286   52649 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:34:24.444320   52649 out.go:204]   - Booting up control plane ...
	I0416 17:34:24.444453   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:34:24.448295   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:34:24.449918   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:34:24.450641   52649 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:34:24.454123   52649 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:35:04.455516   52649 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 17:35:04.456220   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:04.456422   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:35:09.456927   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:09.457123   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:35:19.457533   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:19.457775   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:35:39.458398   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:35:39.458669   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:36:19.460666   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:36:19.460937   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:36:19.460954   52649 kubeadm.go:309] 
	I0416 17:36:19.461019   52649 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 17:36:19.461086   52649 kubeadm.go:309] 		timed out waiting for the condition
	I0416 17:36:19.461101   52649 kubeadm.go:309] 
	I0416 17:36:19.461155   52649 kubeadm.go:309] 	This error is likely caused by:
	I0416 17:36:19.461203   52649 kubeadm.go:309] 		- The kubelet is not running
	I0416 17:36:19.461331   52649 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 17:36:19.461345   52649 kubeadm.go:309] 
	I0416 17:36:19.461495   52649 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 17:36:19.461533   52649 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 17:36:19.461565   52649 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 17:36:19.461572   52649 kubeadm.go:309] 
	I0416 17:36:19.461654   52649 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 17:36:19.461731   52649 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 17:36:19.461743   52649 kubeadm.go:309] 
	I0416 17:36:19.461850   52649 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 17:36:19.461931   52649 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 17:36:19.462037   52649 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 17:36:19.462149   52649 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 17:36:19.462162   52649 kubeadm.go:309] 
	I0416 17:36:19.463298   52649 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:36:19.463400   52649 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 17:36:19.463492   52649 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
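For reference, the troubleshooting advice printed by kubeadm above maps to the following commands on the affected node (a sketch only; it assumes shell access to the node, e.g. via minikube ssh, and CONTAINERID is a placeholder for an ID taken from the ps output):

  # Check whether the kubelet is running and why it may have exited.
  sudo systemctl status kubelet
  sudo journalctl -xeu kubelet | tail -n 100

  # List control-plane containers known to CRI-O, then inspect a failing one.
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID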
	W0416 17:36:19.463638   52649 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0416 17:36:19.463711   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 17:36:24.881759   52649 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.418020564s)
	I0416 17:36:24.881848   52649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:36:24.898302   52649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:36:24.910361   52649 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:36:24.910385   52649 kubeadm.go:156] found existing configuration files:
	
	I0416 17:36:24.910438   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:36:24.924199   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:36:24.924271   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:36:24.936755   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:36:24.948475   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:36:24.948542   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:36:24.959963   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:36:24.970772   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:36:24.970834   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:36:24.982067   52649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:36:24.992900   52649 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:36:24.992960   52649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:36:25.004524   52649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:36:25.247373   52649 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:38:21.456413   52649 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 17:38:21.456505   52649 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 17:38:21.458335   52649 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 17:38:21.458412   52649 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:38:21.458508   52649 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:38:21.458643   52649 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:38:21.458785   52649 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:38:21.458894   52649 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:38:21.460865   52649 out.go:204]   - Generating certificates and keys ...
	I0416 17:38:21.460958   52649 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:38:21.461049   52649 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:38:21.461155   52649 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 17:38:21.461246   52649 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 17:38:21.461344   52649 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 17:38:21.461405   52649 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 17:38:21.461459   52649 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 17:38:21.461510   52649 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 17:38:21.461577   52649 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 17:38:21.461655   52649 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 17:38:21.461693   52649 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 17:38:21.461742   52649 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:38:21.461785   52649 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:38:21.461863   52649 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:38:21.461929   52649 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:38:21.462002   52649 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:38:21.462136   52649 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:38:21.462265   52649 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:38:21.462335   52649 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:38:21.462420   52649 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:38:21.463927   52649 out.go:204]   - Booting up control plane ...
	I0416 17:38:21.464008   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:38:21.464082   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:38:21.464158   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:38:21.464243   52649 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:38:21.464465   52649 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:38:21.464563   52649 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 17:38:21.464669   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.464832   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.464919   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465080   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465137   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465369   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465440   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465617   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465696   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465892   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465907   52649 kubeadm.go:309] 
	I0416 17:38:21.465940   52649 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 17:38:21.465975   52649 kubeadm.go:309] 		timed out waiting for the condition
	I0416 17:38:21.465982   52649 kubeadm.go:309] 
	I0416 17:38:21.466011   52649 kubeadm.go:309] 	This error is likely caused by:
	I0416 17:38:21.466040   52649 kubeadm.go:309] 		- The kubelet is not running
	I0416 17:38:21.466153   52649 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 17:38:21.466164   52649 kubeadm.go:309] 
	I0416 17:38:21.466251   52649 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 17:38:21.466289   52649 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 17:38:21.466329   52649 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 17:38:21.466340   52649 kubeadm.go:309] 
	I0416 17:38:21.466452   52649 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 17:38:21.466521   52649 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 17:38:21.466529   52649 kubeadm.go:309] 
	I0416 17:38:21.466622   52649 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 17:38:21.466695   52649 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 17:38:21.466765   52649 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 17:38:21.466830   52649 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 17:38:21.466852   52649 kubeadm.go:309] 
	I0416 17:38:21.466885   52649 kubeadm.go:393] duration metric: took 8m3.560726976s to StartCluster
	I0416 17:38:21.466921   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:38:21.466981   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:38:21.517447   52649 cri.go:89] found id: ""
	I0416 17:38:21.517474   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.517485   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:38:21.517493   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:38:21.517556   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:38:21.558224   52649 cri.go:89] found id: ""
	I0416 17:38:21.558250   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.558260   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:38:21.558267   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:38:21.558326   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:38:21.608680   52649 cri.go:89] found id: ""
	I0416 17:38:21.608712   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.608727   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:38:21.608735   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:38:21.608786   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:38:21.648819   52649 cri.go:89] found id: ""
	I0416 17:38:21.648860   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.648867   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:38:21.648873   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:38:21.648917   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:38:21.689263   52649 cri.go:89] found id: ""
	I0416 17:38:21.689300   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.689310   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:38:21.689317   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:38:21.689374   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:38:21.729665   52649 cri.go:89] found id: ""
	I0416 17:38:21.729694   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.729703   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:38:21.729709   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:38:21.729755   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:38:21.768070   52649 cri.go:89] found id: ""
	I0416 17:38:21.768096   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.768103   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:38:21.768109   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:38:21.768158   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:38:21.803401   52649 cri.go:89] found id: ""
	I0416 17:38:21.803425   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.803435   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:38:21.803446   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:38:21.803461   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:38:21.859787   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:38:21.859820   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:38:21.874861   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:38:21.874887   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:38:21.962673   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:38:21.962700   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:38:21.962713   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:38:22.072141   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:38:22.072172   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0416 17:38:22.120555   52649 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 17:38:22.120603   52649 out.go:239] * 
	* 
	W0416 17:38:22.120651   52649 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:38:22.120675   52649 out.go:239] * 
	* 
	W0416 17:38:22.121636   52649 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
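The escalation path suggested in the box above amounts to collecting the full log bundle and attaching it to a new issue; a minimal sketch (the profile name is a placeholder, not taken from this log):

  # Collect logs from the affected profile for attachment to a GitHub issue.
  minikube logs --file=logs.txt -p PROFILE_NAME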
	I0416 17:38:22.125185   52649 out.go:177] 
	W0416 17:38:22.126349   52649 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:38:22.126406   52649 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 17:38:22.126429   52649 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 17:38:22.127951   52649 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-795352 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 2 (240.559676ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-795352 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:26 UTC | 16 Apr 24 17:28 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| ssh     | cert-options-303502 ssh                                | cert-options-303502          | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC | 16 Apr 24 17:27 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |                |                     |                     |
	| ssh     | -p cert-options-303502 -- sudo                         | cert-options-303502          | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC | 16 Apr 24 17:27 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |                |                     |                     |
	| delete  | -p cert-options-303502                                 | cert-options-303502          | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC | 16 Apr 24 17:27 UTC |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC | 16 Apr 24 17:28 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-795352        | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-368813             | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC | 16 Apr 24 17:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512869            | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC | 16 Apr 24 17:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-795352                              | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC | 16 Apr 24 17:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-795352             | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC | 16 Apr 24 17:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-795352                              | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| start   | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:30 UTC | 16 Apr 24 17:31 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	| delete  | -p                                                     | disable-driver-mounts-376814 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	|         | disable-driver-mounts-376814                           |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-368813                  | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-512869                 | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:35 UTC |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:37 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:37:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:37:04.764200   55388 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:37:04.764318   55388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:37:04.764328   55388 out.go:304] Setting ErrFile to fd 2...
	I0416 17:37:04.764333   55388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:37:04.764518   55388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:37:04.765077   55388 out.go:298] Setting JSON to false
	I0416 17:37:04.765938   55388 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4777,"bootTime":1713284248,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:37:04.765996   55388 start.go:139] virtualization: kvm guest
	I0416 17:37:04.768061   55388 out.go:177] * [kubernetes-upgrade-633875] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:37:04.769412   55388 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:37:04.769409   55388 notify.go:220] Checking for updates...
	I0416 17:37:04.770894   55388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:37:04.772099   55388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:37:04.773370   55388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:37:04.774743   55388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:37:04.776092   55388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:37:04.777659   55388 config.go:182] Loaded profile config "kubernetes-upgrade-633875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:37:04.778033   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:04.778075   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:04.792607   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0416 17:37:04.793124   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:04.793717   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:37:04.793739   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:04.794049   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:04.794231   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:04.794500   55388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:37:04.794759   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:04.794791   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:04.808862   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37649
	I0416 17:37:04.809234   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:04.809675   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:37:04.809703   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:04.810062   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:04.810254   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:04.846580   55388 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:37:04.847941   55388 start.go:297] selected driver: kvm2
	I0416 17:37:04.847953   55388 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-633875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:37:04.848068   55388 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:37:04.848852   55388 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:37:04.848933   55388 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:37:04.863094   55388 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:37:04.863631   55388 cni.go:84] Creating CNI manager for ""
	I0416 17:37:04.863655   55388 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:37:04.863706   55388 start.go:340] cluster config:
	{Name:kubernetes-upgrade-633875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-633875 Namesp
ace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:37:04.863864   55388 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:37:04.865575   55388 out.go:177] * Starting "kubernetes-upgrade-633875" primary control-plane node in "kubernetes-upgrade-633875" cluster
	I0416 17:37:00.567076   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:03.069630   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:04.866891   55388 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 17:37:04.866923   55388 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0416 17:37:04.866945   55388 cache.go:56] Caching tarball of preloaded images
	I0416 17:37:04.867026   55388 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:37:04.867040   55388 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on crio
	I0416 17:37:04.867151   55388 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kubernetes-upgrade-633875/config.json ...
	I0416 17:37:04.867374   55388 start.go:360] acquireMachinesLock for kubernetes-upgrade-633875: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:37:06.350191   55388 start.go:364] duration metric: took 1.482788883s to acquireMachinesLock for "kubernetes-upgrade-633875"
	I0416 17:37:06.350255   55388 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:37:06.350276   55388 fix.go:54] fixHost starting: 
	I0416 17:37:06.350668   55388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:06.350717   55388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:06.367553   55388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0416 17:37:06.368203   55388 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:06.369878   55388 main.go:141] libmachine: Using API Version  1
	I0416 17:37:06.369907   55388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:06.370277   55388 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:06.370464   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:06.370618   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetState
	I0416 17:37:06.372099   55388 fix.go:112] recreateIfNeeded on kubernetes-upgrade-633875: state=Running err=<nil>
	W0416 17:37:06.372128   55388 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:37:06.374023   55388 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-633875" VM ...
	I0416 17:37:04.889394   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:04.889918   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has current primary IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:04.889949   53724 main.go:141] libmachine: (no-preload-368813) Found IP for machine: 192.168.72.33
	I0416 17:37:04.889958   53724 main.go:141] libmachine: (no-preload-368813) Reserving static IP address...
	I0416 17:37:04.890418   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "no-preload-368813", mac: "52:54:00:f7:61:eb", ip: "192.168.72.33"} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:04.890447   53724 main.go:141] libmachine: (no-preload-368813) DBG | skip adding static IP to network mk-no-preload-368813 - found existing host DHCP lease matching {name: "no-preload-368813", mac: "52:54:00:f7:61:eb", ip: "192.168.72.33"}
	I0416 17:37:04.890464   53724 main.go:141] libmachine: (no-preload-368813) Reserved static IP address: 192.168.72.33
	I0416 17:37:04.890477   53724 main.go:141] libmachine: (no-preload-368813) Waiting for SSH to be available...
	I0416 17:37:04.890490   53724 main.go:141] libmachine: (no-preload-368813) DBG | Getting to WaitForSSH function...
	I0416 17:37:04.892931   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:04.893315   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:04.893340   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:04.893490   53724 main.go:141] libmachine: (no-preload-368813) DBG | Using SSH client type: external
	I0416 17:37:04.893514   53724 main.go:141] libmachine: (no-preload-368813) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa (-rw-------)
	I0416 17:37:04.893543   53724 main.go:141] libmachine: (no-preload-368813) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 17:37:04.893563   53724 main.go:141] libmachine: (no-preload-368813) DBG | About to run SSH command:
	I0416 17:37:04.893578   53724 main.go:141] libmachine: (no-preload-368813) DBG | exit 0
	I0416 17:37:05.021762   53724 main.go:141] libmachine: (no-preload-368813) DBG | SSH cmd err, output: <nil>: 
	I0416 17:37:05.022093   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetConfigRaw
	I0416 17:37:05.022855   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetIP
	I0416 17:37:05.025557   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.025925   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.025958   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.026136   53724 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/config.json ...
	I0416 17:37:05.026308   53724 machine.go:94] provisionDockerMachine start ...
	I0416 17:37:05.026325   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:05.026619   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.028932   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.029318   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.029354   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.029446   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.029637   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.029782   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.029933   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.030105   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:05.030305   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:05.030321   53724 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:37:05.150085   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:37:05.150126   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetMachineName
	I0416 17:37:05.150422   53724 buildroot.go:166] provisioning hostname "no-preload-368813"
	I0416 17:37:05.150454   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetMachineName
	I0416 17:37:05.150643   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.153784   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.154147   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.154185   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.154326   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.154480   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.154661   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.154784   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.154960   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:05.155123   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:05.155135   53724 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-368813 && echo "no-preload-368813" | sudo tee /etc/hostname
	I0416 17:37:05.299556   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-368813
	
	I0416 17:37:05.299585   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.302432   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.302778   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.302804   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.302997   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.303223   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.303381   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.303510   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.303659   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:05.303870   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:05.303888   53724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-368813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-368813/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-368813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:37:05.431975   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:37:05.432002   53724 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:37:05.432030   53724 buildroot.go:174] setting up certificates
	I0416 17:37:05.432040   53724 provision.go:84] configureAuth start
	I0416 17:37:05.432048   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetMachineName
	I0416 17:37:05.432369   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetIP
	I0416 17:37:05.434863   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.435262   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.435292   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.435412   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.437642   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.437996   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.438040   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.438197   53724 provision.go:143] copyHostCerts
	I0416 17:37:05.438244   53724 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:37:05.438255   53724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:37:05.438306   53724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:37:05.438440   53724 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:37:05.438455   53724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:37:05.438490   53724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:37:05.438558   53724 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:37:05.438566   53724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:37:05.438585   53724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:37:05.438633   53724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.no-preload-368813 san=[127.0.0.1 192.168.72.33 localhost minikube no-preload-368813]
	I0416 17:37:05.579937   53724 provision.go:177] copyRemoteCerts
	I0416 17:37:05.579990   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:37:05.580013   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.582601   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.582920   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.582951   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.583075   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.583244   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.583386   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.583500   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:05.676952   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:37:05.705789   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:37:05.739072   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 17:37:05.770865   53724 provision.go:87] duration metric: took 338.815509ms to configureAuth
	I0416 17:37:05.770894   53724 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:37:05.771080   53724 config.go:182] Loaded profile config "no-preload-368813": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:37:05.771178   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:05.773993   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.774334   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:05.774363   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:05.774508   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:05.774723   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.774906   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:05.775066   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:05.775252   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:05.775455   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:05.775475   53724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:37:06.087339   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:37:06.087369   53724 machine.go:97] duration metric: took 1.061049558s to provisionDockerMachine
	I0416 17:37:06.087380   53724 start.go:293] postStartSetup for "no-preload-368813" (driver="kvm2")
	I0416 17:37:06.087391   53724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:37:06.087406   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.087718   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:37:06.087751   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:06.090496   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.090907   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.090940   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.091130   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:06.091301   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.091461   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:06.091606   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:06.183788   53724 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:37:06.188831   53724 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:37:06.188866   53724 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:37:06.188930   53724 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:37:06.189008   53724 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:37:06.189090   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:37:06.201361   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:37:06.229472   53724 start.go:296] duration metric: took 142.079309ms for postStartSetup
	I0416 17:37:06.229516   53724 fix.go:56] duration metric: took 19.74706223s for fixHost
	I0416 17:37:06.229540   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:06.232137   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.232482   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.232516   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.232682   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:06.232903   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.233082   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.233223   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:06.233412   53724 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:06.233650   53724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.33 22 <nil> <nil>}
	I0416 17:37:06.233663   53724 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:37:06.350010   53724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713289026.321915296
	
	I0416 17:37:06.350036   53724 fix.go:216] guest clock: 1713289026.321915296
	I0416 17:37:06.350045   53724 fix.go:229] Guest: 2024-04-16 17:37:06.321915296 +0000 UTC Remote: 2024-04-16 17:37:06.229520511 +0000 UTC m=+336.716982241 (delta=92.394785ms)
	I0416 17:37:06.350086   53724 fix.go:200] guest clock delta is within tolerance: 92.394785ms
	I0416 17:37:06.350096   53724 start.go:83] releasing machines lock for "no-preload-368813", held for 19.867678127s
	I0416 17:37:06.350130   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.350445   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetIP
	I0416 17:37:06.353155   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.353565   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.353601   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.353712   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.354248   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.354441   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:06.354510   53724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:37:06.354558   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:06.354676   53724 ssh_runner.go:195] Run: cat /version.json
	I0416 17:37:06.354701   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:06.357402   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.357437   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.357726   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.357752   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.357849   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:06.357848   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:06.357874   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:06.358010   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.358120   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:06.358181   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:06.358259   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:06.358341   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:06.358428   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:06.358576   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:06.471357   53724 ssh_runner.go:195] Run: systemctl --version
	I0416 17:37:06.478216   53724 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:37:06.628508   53724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:37:06.637713   53724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:37:06.637786   53724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:37:06.662717   53724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:37:06.662741   53724 start.go:494] detecting cgroup driver to use...
	I0416 17:37:06.662806   53724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:37:06.685365   53724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:37:06.705771   53724 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:37:06.705857   53724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:37:06.723890   53724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:37:06.739861   53724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:37:06.866653   53724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:37:07.029166   53724 docker.go:233] disabling docker service ...
	I0416 17:37:07.029242   53724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:37:07.045705   53724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:37:07.060441   53724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:37:07.200010   53724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:37:07.341930   53724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:37:07.358423   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:37:07.381694   53724 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:37:07.381764   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.394648   53724 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:37:07.394714   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.408756   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.420986   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.434883   53724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:37:07.449279   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.463375   53724 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.484682   53724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:07.498345   53724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:37:07.510414   53724 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 17:37:07.510485   53724 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 17:37:07.526274   53724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:37:07.537928   53724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:37:07.687822   53724 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:37:07.851570   53724 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:37:07.851660   53724 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:37:07.857638   53724 start.go:562] Will wait 60s for crictl version
	I0416 17:37:07.857694   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:07.862026   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:37:07.911220   53724 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:37:07.911303   53724 ssh_runner.go:195] Run: crio --version
	I0416 17:37:07.942172   53724 ssh_runner.go:195] Run: crio --version
	I0416 17:37:07.987215   53724 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 17:37:07.988643   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetIP
	I0416 17:37:07.992015   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:07.992372   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:07.992412   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:07.992625   53724 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0416 17:37:07.997913   53724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:37:08.015198   53724 kubeadm.go:877] updating cluster {Name:no-preload-368813 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.2 ClusterName:no-preload-368813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:37:08.015319   53724 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 17:37:08.015349   53724 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:37:08.061694   53724 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0416 17:37:08.061724   53724 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.2 registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 registry.k8s.io/kube-scheduler:v1.30.0-rc.2 registry.k8s.io/kube-proxy:v1.30.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 17:37:08.061791   53724 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.062005   53724 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.062135   53724 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.062258   53724 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.062373   53724 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.062529   53724 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0416 17:37:08.062671   53724 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.062788   53724 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.064021   53724 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.064250   53724 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.064478   53724 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.064501   53724 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.064635   53724 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.064686   53724 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.064705   53724 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.064646   53724 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0416 17:37:08.232497   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.236554   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.241828   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0416 17:37:08.245226   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.251937   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.269175   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.271121   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.325571   53724 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0416 17:37:08.325619   53724 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.325668   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.345391   53724 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.444138   53724 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0416 17:37:08.444190   53724 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.444242   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548066   53724 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" does not exist at hash "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b" in container runtime
	I0416 17:37:08.548103   53724 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" does not exist at hash "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1" in container runtime
	I0416 17:37:08.548115   53724 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.548130   53724 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.548161   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548163   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548207   53724 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.2" does not exist at hash "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e" in container runtime
	I0416 17:37:08.548241   53724 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" does not exist at hash "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6" in container runtime
	I0416 17:37:08.548248   53724 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.548269   53724 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.548288   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548306   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.548335   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0416 17:37:08.548373   53724 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0416 17:37:08.548398   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0416 17:37:08.548402   53724 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.548443   53724 ssh_runner.go:195] Run: which crictl
	I0416 17:37:08.615779   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 17:37:08.615810   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0416 17:37:08.615820   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 17:37:08.615859   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 17:37:08.615871   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:08.615783   53724 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 17:37:08.615897   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0416 17:37:08.615945   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0416 17:37:08.616042   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0416 17:37:08.748677   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0416 17:37:08.748786   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 17:37:08.748784   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0416 17:37:08.748958   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 17:37:08.749462   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0416 17:37:08.749524   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0416 17:37:08.749541   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0416 17:37:08.749547   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 17:37:08.749553   53724 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0416 17:37:08.749590   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0416 17:37:08.749596   53724 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0416 17:37:08.749591   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 17:37:08.749657   53724 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0416 17:37:08.749630   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0416 17:37:08.760550   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2 (exists)
	I0416 17:37:08.761129   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2 (exists)
	I0416 17:37:06.375230   55388 machine.go:94] provisionDockerMachine start ...
	I0416 17:37:06.375251   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:06.375442   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:06.377827   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.378205   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.378230   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.378391   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:06.378563   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.378729   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.378849   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:06.378986   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:06.379226   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:06.379241   55388 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:37:06.494024   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-633875
	
	I0416 17:37:06.494053   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:37:06.494319   55388 buildroot.go:166] provisioning hostname "kubernetes-upgrade-633875"
	I0416 17:37:06.494348   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:37:06.494524   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:06.497487   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.497892   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.497922   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.498052   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:06.498248   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.498408   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.498540   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:06.498751   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:06.498974   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:06.498991   55388 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-633875 && echo "kubernetes-upgrade-633875" | sudo tee /etc/hostname
	I0416 17:37:06.636590   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-633875
	
	I0416 17:37:06.636629   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:06.639776   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.640182   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.640212   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.640418   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:06.640591   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.640751   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:06.640932   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:06.641136   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:06.641301   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:06.641319   55388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-633875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-633875/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-633875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:37:06.767180   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:37:06.767207   55388 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:37:06.767249   55388 buildroot.go:174] setting up certificates
	I0416 17:37:06.767266   55388 provision.go:84] configureAuth start
	I0416 17:37:06.767291   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetMachineName
	I0416 17:37:06.767594   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:37:06.770532   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.770926   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.770976   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.771124   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:06.773394   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.773809   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:06.773836   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:06.774061   55388 provision.go:143] copyHostCerts
	I0416 17:37:06.774121   55388 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:37:06.774142   55388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:37:06.774210   55388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:37:06.774341   55388 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:37:06.774355   55388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:37:06.774387   55388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:37:06.774484   55388 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:37:06.774497   55388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:37:06.774530   55388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:37:06.774619   55388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-633875 san=[127.0.0.1 192.168.39.149 kubernetes-upgrade-633875 localhost minikube]
	I0416 17:37:07.210423   55388 provision.go:177] copyRemoteCerts
	I0416 17:37:07.210501   55388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:37:07.210530   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:07.213438   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:07.213842   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:07.213878   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:07.213972   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:07.214172   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:07.214359   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:07.214508   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:37:07.305628   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:37:07.334474   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0416 17:37:07.369595   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 17:37:07.403200   55388 provision.go:87] duration metric: took 635.902682ms to configureAuth
	I0416 17:37:07.403228   55388 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:37:07.403420   55388 config.go:182] Loaded profile config "kubernetes-upgrade-633875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:37:07.403510   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:07.406659   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:07.407098   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:07.407123   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:07.407325   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:07.407508   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:07.407712   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:07.407879   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:07.408051   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:07.408252   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:07.408270   55388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:37:08.476953   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:37:08.476978   55388 machine.go:97] duration metric: took 2.10173376s to provisionDockerMachine
	I0416 17:37:08.476990   55388 start.go:293] postStartSetup for "kubernetes-upgrade-633875" (driver="kvm2")
	I0416 17:37:08.477005   55388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:37:08.477023   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.477353   55388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:37:08.477390   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:08.480308   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.480674   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.480703   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.480878   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:08.481076   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.481276   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:08.481407   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:37:08.573233   55388 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:37:08.578701   55388 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:37:08.578730   55388 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:37:08.578800   55388 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:37:08.578909   55388 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:37:08.579046   55388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:37:08.594792   55388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:37:08.631328   55388 start.go:296] duration metric: took 154.326696ms for postStartSetup
	I0416 17:37:08.631361   55388 fix.go:56] duration metric: took 2.281095817s for fixHost
	I0416 17:37:08.631383   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:08.634352   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.634683   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.634712   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.635020   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:08.635245   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.635425   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.635628   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:08.635806   55388 main.go:141] libmachine: Using SSH client type: native
	I0416 17:37:08.636007   55388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I0416 17:37:08.636027   55388 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:37:08.755600   55388 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713289028.702903253
	
	I0416 17:37:08.755624   55388 fix.go:216] guest clock: 1713289028.702903253
	I0416 17:37:08.755633   55388 fix.go:229] Guest: 2024-04-16 17:37:08.702903253 +0000 UTC Remote: 2024-04-16 17:37:08.631364556 +0000 UTC m=+3.913384729 (delta=71.538697ms)
	I0416 17:37:08.755661   55388 fix.go:200] guest clock delta is within tolerance: 71.538697ms
	I0416 17:37:08.755667   55388 start.go:83] releasing machines lock for "kubernetes-upgrade-633875", held for 2.405434774s
	I0416 17:37:08.755693   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.755971   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetIP
	I0416 17:37:08.759403   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.759848   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.759881   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.760046   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.760648   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.760857   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .DriverName
	I0416 17:37:08.760941   55388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:37:08.760976   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:08.761080   55388 ssh_runner.go:195] Run: cat /version.json
	I0416 17:37:08.761102   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHHostname
	I0416 17:37:08.764060   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.764402   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.764690   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.764719   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.764782   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:a1:92", ip: ""} in network mk-kubernetes-upgrade-633875: {Iface:virbr1 ExpiryTime:2024-04-16 18:36:39 +0000 UTC Type:0 Mac:52:54:00:94:a1:92 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:kubernetes-upgrade-633875 Clientid:01:52:54:00:94:a1:92}
	I0416 17:37:08.764813   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) DBG | domain kubernetes-upgrade-633875 has defined IP address 192.168.39.149 and MAC address 52:54:00:94:a1:92 in network mk-kubernetes-upgrade-633875
	I0416 17:37:08.764998   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:08.765092   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHPort
	I0416 17:37:08.765299   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.765373   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHKeyPath
	I0416 17:37:08.765450   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:08.765567   55388 main.go:141] libmachine: (kubernetes-upgrade-633875) Calling .GetSSHUsername
	I0416 17:37:08.765638   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:37:08.765713   55388 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kubernetes-upgrade-633875/id_rsa Username:docker}
	I0416 17:37:08.922322   55388 ssh_runner.go:195] Run: systemctl --version
	I0416 17:37:08.949599   55388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:37:09.267176   55388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:37:09.305504   55388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:37:09.305574   55388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:37:09.338140   55388 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 17:37:09.338165   55388 start.go:494] detecting cgroup driver to use...
	I0416 17:37:09.338233   55388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:37:09.426636   55388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:37:09.474429   55388 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:37:09.474497   55388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:37:09.500206   55388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:37:09.518014   55388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:37:09.711725   55388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:37:05.564972   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:07.565601   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:09.569287   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:10.865114   53724 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.115431361s)
	I0416 17:37:10.865158   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0416 17:37:10.865276   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.115660537s)
	I0416 17:37:10.865308   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0416 17:37:10.865327   53724 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.115674106s)
	I0416 17:37:10.865354   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2 (exists)
	I0416 17:37:10.865337   53724 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0416 17:37:10.865373   53724 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.115809755s)
	I0416 17:37:10.865391   53724 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2 (exists)
	I0416 17:37:10.865409   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0416 17:37:09.920225   55388 docker.go:233] disabling docker service ...
	I0416 17:37:09.920299   55388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:37:09.950938   55388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:37:09.973589   55388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:37:10.185781   55388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:37:10.385437   55388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:37:10.403356   55388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:37:10.434851   55388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:37:10.434947   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.453473   55388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:37:10.453544   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.471529   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.488189   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.504551   55388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:37:10.522872   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.535463   55388 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.549552   55388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:37:10.562459   55388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:37:10.574700   55388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:37:10.586226   55388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:37:10.756003   55388 ssh_runner.go:195] Run: sudo systemctl restart crio
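The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed substitutions (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) and then restarts cri-o. A minimal Go sketch of the first two substitutions; the sample drop-in content used as input is invented for illustration, only the replacement values come from the log:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Illustrative drop-in content before the edit; the real file on the guest will differ.
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.8"
	[crio.runtime]
	cgroup_manager = "systemd"
	`
		// Mirror the two sed substitutions from the log: pin the pause image and switch to cgroupfs.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}

Because the whole matching line is replaced with a fixed value, the edit is idempotent: rerunning it against an already-converted file leaves it unchanged, which is why the log can apply the same sed commands on every start.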
	I0416 17:37:12.068056   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:14.565051   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:14.878395   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.012956596s)
	I0416 17:37:14.878427   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0416 17:37:14.878451   53724 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 17:37:14.878497   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 17:37:16.947627   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2: (2.069101064s)
	I0416 17:37:16.947655   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 from cache
	I0416 17:37:16.947682   53724 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 17:37:16.947732   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 17:37:19.215393   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2: (2.267634517s)
	I0416 17:37:19.215430   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 from cache
	I0416 17:37:19.215458   53724 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0416 17:37:19.215507   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0416 17:37:16.566813   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:19.064680   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:19.970020   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0416 17:37:19.970068   53724 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 17:37:19.970123   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 17:37:22.424392   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.454240217s)
	I0416 17:37:22.424418   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 from cache
	I0416 17:37:22.424446   53724 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 17:37:22.424505   53724 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 17:37:21.564890   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:23.566319   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:24.586584   53724 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.16205441s)
	I0416 17:37:24.586610   53724 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18649-3628/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 from cache
	I0416 17:37:24.586641   53724 cache_images.go:123] Successfully loaded all cached images
	I0416 17:37:24.586647   53724 cache_images.go:92] duration metric: took 16.524908979s to LoadCachedImages
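The cache_images/crio lines above walk through the pre-cached image tarballs under /var/lib/minikube/images, loading each one with "sudo podman load -i ..." until all cached images are present in cri-o's storage. A sketch of that loop with os/exec; the directory and image names are taken from the log, while the loop itself is illustrative rather than minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	func main() {
		dir := "/var/lib/minikube/images"
		images := []string{
			"etcd_3.5.12-0",
			"kube-scheduler_v1.30.0-rc.2",
			"kube-controller-manager_v1.30.0-rc.2",
			"storage-provisioner_v5",
			"kube-apiserver_v1.30.0-rc.2",
			"kube-proxy_v1.30.0-rc.2",
		}
		for _, img := range images {
			tarball := filepath.Join(dir, img)
			fmt.Println("Loading image:", tarball)
			if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
				fmt.Printf("load %s failed: %v\n%s\n", img, err, out)
				return
			}
		}
	}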
	I0416 17:37:24.586657   53724 kubeadm.go:928] updating node { 192.168.72.33 8443 v1.30.0-rc.2 crio true true} ...
	I0416 17:37:24.586774   53724 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-368813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-368813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:37:24.586854   53724 ssh_runner.go:195] Run: crio config
	I0416 17:37:24.645059   53724 cni.go:84] Creating CNI manager for ""
	I0416 17:37:24.645089   53724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:37:24.645103   53724 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:37:24.645132   53724 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.33 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-368813 NodeName:no-preload-368813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:37:24.645282   53724 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-368813"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:37:24.645344   53724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 17:37:24.659269   53724 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:37:24.659766   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:37:24.672455   53724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0416 17:37:24.693131   53724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 17:37:24.713433   53724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0416 17:37:24.734204   53724 ssh_runner.go:195] Run: grep 192.168.72.33	control-plane.minikube.internal$ /etc/hosts
	I0416 17:37:24.738626   53724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:37:24.752746   53724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:37:24.885615   53724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:37:24.904188   53724 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813 for IP: 192.168.72.33
	I0416 17:37:24.904208   53724 certs.go:194] generating shared ca certs ...
	I0416 17:37:24.904227   53724 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:37:24.904403   53724 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:37:24.904459   53724 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:37:24.904470   53724 certs.go:256] generating profile certs ...
	I0416 17:37:24.904575   53724 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.key
	I0416 17:37:24.904656   53724 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/apiserver.key.dde448ea
	I0416 17:37:24.904711   53724 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/proxy-client.key
	I0416 17:37:24.904874   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:37:24.904912   53724 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:37:24.904938   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:37:24.904980   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:37:24.905030   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:37:24.905062   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:37:24.905116   53724 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:37:24.905888   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:37:24.938183   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:37:24.966084   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:37:24.993879   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:37:25.027746   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 17:37:25.053149   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:37:25.089639   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:37:25.116547   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:37:25.141964   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:37:25.167574   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:37:25.193102   53724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:37:25.218836   53724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:37:25.237210   53724 ssh_runner.go:195] Run: openssl version
	I0416 17:37:25.243344   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:37:25.255714   53724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:37:25.260656   53724 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:37:25.260721   53724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:37:25.267057   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:37:25.279172   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:37:25.291391   53724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:37:25.296938   53724 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:37:25.296972   53724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:37:25.303026   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:37:25.315351   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:37:25.327627   53724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:37:25.332320   53724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:37:25.332355   53724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:37:25.338610   53724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:37:25.350961   53724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:37:25.356003   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:37:25.362451   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:37:25.368848   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:37:25.375257   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:37:25.381547   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:37:25.387670   53724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
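The series of "openssl x509 ... -checkend 86400" runs above asks whether each control-plane certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509; this is a sketch under the assumption that the certificate files are PEM-encoded, with the first path taken from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin mirrors `openssl x509 -checkend`: it reports whether the
	// certificate in the given PEM file expires within the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log; run on the guest it would check the apiserver-kubelet client cert.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}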
	I0416 17:37:25.393994   53724 kubeadm.go:391] StartCluster: {Name:no-preload-368813 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-368813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:37:25.394072   53724 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:37:25.394104   53724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:37:25.438139   53724 cri.go:89] found id: ""
	I0416 17:37:25.438216   53724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 17:37:25.450096   53724 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 17:37:25.450114   53724 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 17:37:25.450119   53724 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 17:37:25.450162   53724 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 17:37:25.461706   53724 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:37:25.462998   53724 kubeconfig.go:125] found "no-preload-368813" server: "https://192.168.72.33:8443"
	I0416 17:37:25.465272   53724 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 17:37:25.476435   53724 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.33
	I0416 17:37:25.476462   53724 kubeadm.go:1154] stopping kube-system containers ...
	I0416 17:37:25.476471   53724 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 17:37:25.476511   53724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:37:25.518010   53724 cri.go:89] found id: ""
	I0416 17:37:25.518097   53724 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 17:37:25.536784   53724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:37:25.550182   53724 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:37:25.550198   53724 kubeadm.go:156] found existing configuration files:
	
	I0416 17:37:25.550265   53724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:37:25.562463   53724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:37:25.562514   53724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:37:25.575053   53724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:37:25.587142   53724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:37:25.587190   53724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:37:25.599571   53724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:37:25.611495   53724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:37:25.611534   53724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:37:25.623888   53724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:37:25.636118   53724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:37:25.636166   53724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 17:37:25.648781   53724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:37:25.661134   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:25.783423   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:26.746855   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:26.978330   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:27.075325   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
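restartPrimaryControlPlane re-runs kubeadm phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of doing a full kubeadm init. A minimal sketch of that sequencing with os/exec; the binary path and phase list mirror the log, while the error handling is illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm" // path taken from the log
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
		fmt.Println("all kubeadm restart phases completed")
	}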
	I0416 17:37:27.196663   53724 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:37:27.196746   53724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:37:27.696969   53724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:37:28.197025   53724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:37:28.281883   53724 api_server.go:72] duration metric: took 1.085219632s to wait for apiserver process to appear ...
	I0416 17:37:28.281914   53724 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:37:28.281955   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:26.065178   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:28.067229   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:31.430709   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 17:37:31.430738   53724 api_server.go:103] status: https://192.168.72.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 17:37:31.430752   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:31.460238   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 17:37:31.460263   53724 api_server.go:103] status: https://192.168.72.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 17:37:31.782156   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:31.786676   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 17:37:31.786708   53724 api_server.go:103] status: https://192.168.72.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 17:37:32.282799   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:32.287374   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 17:37:32.287396   53724 api_server.go:103] status: https://192.168.72.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 17:37:32.783063   53724 api_server.go:253] Checking apiserver healthz at https://192.168.72.33:8443/healthz ...
	I0416 17:37:32.788958   53724 api_server.go:279] https://192.168.72.33:8443/healthz returned 200:
	ok
	I0416 17:37:32.801262   53724 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 17:37:32.801294   53724 api_server.go:131] duration metric: took 4.519371789s to wait for apiserver health ...
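The healthz probes above tolerate the 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) responses and keep polling roughly every 500ms until the endpoint returns 200. A minimal sketch of such a poll loop; skipping TLS verification and the retry interval are assumptions for illustration, since the apiserver here serves a self-signed certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The probe skips certificate verification because the cluster cert is self-signed.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "ok" response seen at the end of the wait
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log timestamps
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.33:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}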
	I0416 17:37:32.801309   53724 cni.go:84] Creating CNI manager for ""
	I0416 17:37:32.801317   53724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:37:32.802960   53724 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 17:37:32.804534   53724 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 17:37:32.831035   53724 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 17:37:32.865460   53724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:37:32.875725   53724 system_pods.go:59] 8 kube-system pods found
	I0416 17:37:32.875754   53724 system_pods.go:61] "coredns-7db6d8ff4d-69lpx" [b3b140b9-fe8c-4289-94d3-df5f8ee50485] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 17:37:32.875761   53724 system_pods.go:61] "etcd-no-preload-368813" [df27fe8b-1b49-444c-93a7-dbc4e9842cb2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 17:37:32.875768   53724 system_pods.go:61] "kube-apiserver-no-preload-368813" [0b4479c4-5c25-45b2-8ffc-4e974eb41a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 17:37:32.875773   53724 system_pods.go:61] "kube-controller-manager-no-preload-368813" [99df4534-f626-4a7f-9835-ca4935ce4a35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 17:37:32.875779   53724 system_pods.go:61] "kube-proxy-jtn9f" [b64c6a20-cc25-4ea9-9c41-8dac9f537332] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 17:37:32.875784   53724 system_pods.go:61] "kube-scheduler-no-preload-368813" [eccdb209-897b-4f20-ac38-506769602cc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 17:37:32.875788   53724 system_pods.go:61] "metrics-server-569cc877fc-tt8vp" [6c42b82b-7ff1-4f18-a387-a2c7b06adb63] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 17:37:32.875793   53724 system_pods.go:61] "storage-provisioner" [c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 17:37:32.875799   53724 system_pods.go:74] duration metric: took 10.321803ms to wait for pod list to return data ...
	I0416 17:37:32.875805   53724 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:37:32.879090   53724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:37:32.879117   53724 node_conditions.go:123] node cpu capacity is 2
	I0416 17:37:32.879133   53724 node_conditions.go:105] duration metric: took 3.322937ms to run NodePressure ...
	I0416 17:37:32.879152   53724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:37:33.168696   53724 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 17:37:33.174450   53724 kubeadm.go:733] kubelet initialised
	I0416 17:37:33.174470   53724 kubeadm.go:734] duration metric: took 5.749269ms waiting for restarted kubelet to initialise ...
	I0416 17:37:33.174476   53724 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:37:33.179502   53724 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.184350   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.184369   53724 pod_ready.go:81] duration metric: took 4.846155ms for pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.184377   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.184383   53724 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.191851   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "etcd-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.191873   53724 pod_ready.go:81] duration metric: took 7.48224ms for pod "etcd-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.191883   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "etcd-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.191891   53724 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.196552   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "kube-apiserver-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.196570   53724 pod_ready.go:81] duration metric: took 4.672597ms for pod "kube-apiserver-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.196577   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "kube-apiserver-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.196582   53724 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.272397   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.272425   53724 pod_ready.go:81] duration metric: took 75.834666ms for pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.272434   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.272440   53724 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jtn9f" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:33.669448   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "kube-proxy-jtn9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.669478   53724 pod_ready.go:81] duration metric: took 397.031738ms for pod "kube-proxy-jtn9f" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:33.669486   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "kube-proxy-jtn9f" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:33.669493   53724 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:34.069026   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "kube-scheduler-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:34.069052   53724 pod_ready.go:81] duration metric: took 399.552424ms for pod "kube-scheduler-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:34.069061   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "kube-scheduler-no-preload-368813" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:34.069066   53724 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:34.469216   53724 pod_ready.go:97] node "no-preload-368813" hosting pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:34.469238   53724 pod_ready.go:81] duration metric: took 400.163808ms for pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace to be "Ready" ...
	E0416 17:37:34.469247   53724 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-368813" hosting pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:34.469254   53724 pod_ready.go:38] duration metric: took 1.294770407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
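Each of the pod_ready checks above is short-circuited because the hosting node still reports Ready=False, so the per-pod wait is skipped rather than failed. A sketch of that node-level gate using client-go (requires k8s.io/client-go in go.mod); the kubeconfig path and node name are taken from the log, everything else is illustrative:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's Ready condition is True.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18649-3628/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-368813", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("node %s Ready=%v\n", node.Name, nodeIsReady(node))
	}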
	I0416 17:37:34.469271   53724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:37:34.482299   53724 ops.go:34] apiserver oom_adj: -16
	I0416 17:37:34.482324   53724 kubeadm.go:591] duration metric: took 9.032199177s to restartPrimaryControlPlane
	I0416 17:37:34.482334   53724 kubeadm.go:393] duration metric: took 9.088344142s to StartCluster
	I0416 17:37:34.482350   53724 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:37:34.482418   53724 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:37:34.484027   53724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:37:34.484259   53724 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.33 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:37:34.486190   53724 out.go:177] * Verifying Kubernetes components...
	I0416 17:37:34.484366   53724 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:37:34.484449   53724 config.go:182] Loaded profile config "no-preload-368813": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:37:34.487436   53724 addons.go:69] Setting default-storageclass=true in profile "no-preload-368813"
	I0416 17:37:34.487445   53724 addons.go:69] Setting metrics-server=true in profile "no-preload-368813"
	I0416 17:37:34.487452   53724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:37:34.487468   53724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-368813"
	I0416 17:37:34.487475   53724 addons.go:234] Setting addon metrics-server=true in "no-preload-368813"
	W0416 17:37:34.487483   53724 addons.go:243] addon metrics-server should already be in state true
	I0416 17:37:34.487506   53724 host.go:66] Checking if "no-preload-368813" exists ...
	I0416 17:37:34.487437   53724 addons.go:69] Setting storage-provisioner=true in profile "no-preload-368813"
	I0416 17:37:34.487541   53724 addons.go:234] Setting addon storage-provisioner=true in "no-preload-368813"
	W0416 17:37:34.487555   53724 addons.go:243] addon storage-provisioner should already be in state true
	I0416 17:37:34.487584   53724 host.go:66] Checking if "no-preload-368813" exists ...
	I0416 17:37:34.487823   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.487855   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.487867   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.487895   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.487951   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.487983   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.504274   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38635
	I0416 17:37:34.504426   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0416 17:37:34.504652   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.504883   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.505178   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.505207   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.505368   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.505390   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.505578   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.505720   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.505779   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:37:34.506261   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.506294   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.506850   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0416 17:37:34.507371   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.507842   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.507868   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.508214   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.508765   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.508814   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.509191   53724 addons.go:234] Setting addon default-storageclass=true in "no-preload-368813"
	W0416 17:37:34.509209   53724 addons.go:243] addon default-storageclass should already be in state true
	I0416 17:37:34.509236   53724 host.go:66] Checking if "no-preload-368813" exists ...
	I0416 17:37:34.509521   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.509555   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.522208   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0416 17:37:34.522634   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.523123   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.523151   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.523339   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0416 17:37:34.523492   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.523648   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:37:34.523706   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.524155   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.524184   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.524511   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.524690   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:37:34.525300   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:34.527243   53724 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 17:37:34.528539   53724 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 17:37:34.528555   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 17:37:34.528573   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:34.526300   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:34.528313   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38987
	I0416 17:37:34.530050   53724 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:37:34.529061   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.531155   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.531489   53724 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:37:34.531513   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:37:34.531528   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:34.531581   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:34.531607   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.531737   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:34.531904   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:34.532051   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:34.532067   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.532087   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.532282   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:34.532454   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.533039   53724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:37:34.533083   53724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:37:34.534355   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.534689   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:34.534716   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.534868   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:34.535215   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:34.535355   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:34.535489   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:30.565630   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:32.566619   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:35.066221   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:34.580095   53724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0416 17:37:34.580488   53724 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:37:34.580956   53724 main.go:141] libmachine: Using API Version  1
	I0416 17:37:34.580981   53724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:37:34.581299   53724 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:37:34.581514   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetState
	I0416 17:37:34.582947   53724 main.go:141] libmachine: (no-preload-368813) Calling .DriverName
	I0416 17:37:34.583186   53724 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:37:34.583199   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:37:34.583211   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHHostname
	I0416 17:37:34.585917   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.586281   53724 main.go:141] libmachine: (no-preload-368813) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:61:eb", ip: ""} in network mk-no-preload-368813: {Iface:virbr4 ExpiryTime:2024-04-16 18:36:59 +0000 UTC Type:0 Mac:52:54:00:f7:61:eb Iaid: IPaddr:192.168.72.33 Prefix:24 Hostname:no-preload-368813 Clientid:01:52:54:00:f7:61:eb}
	I0416 17:37:34.586309   53724 main.go:141] libmachine: (no-preload-368813) DBG | domain no-preload-368813 has defined IP address 192.168.72.33 and MAC address 52:54:00:f7:61:eb in network mk-no-preload-368813
	I0416 17:37:34.586515   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHPort
	I0416 17:37:34.586905   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHKeyPath
	I0416 17:37:34.587115   53724 main.go:141] libmachine: (no-preload-368813) Calling .GetSSHUsername
	I0416 17:37:34.587295   53724 sshutil.go:53] new ssh client: &{IP:192.168.72.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/no-preload-368813/id_rsa Username:docker}
	I0416 17:37:34.696222   53724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:37:34.719179   53724 node_ready.go:35] waiting up to 6m0s for node "no-preload-368813" to be "Ready" ...
	I0416 17:37:34.782980   53724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:37:34.798957   53724 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 17:37:34.798986   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 17:37:34.837727   53724 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 17:37:34.837753   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 17:37:34.840957   53724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:37:34.879657   53724 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 17:37:34.879676   53724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 17:37:34.934346   53724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 17:37:35.223556   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.223578   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.223889   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.223904   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.223913   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.223920   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.223930   53724 main.go:141] libmachine: (no-preload-368813) DBG | Closing plugin on server side
	I0416 17:37:35.224159   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.224181   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.224198   53724 main.go:141] libmachine: (no-preload-368813) DBG | Closing plugin on server side
	I0416 17:37:35.229835   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.229852   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.230093   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.230105   53724 main.go:141] libmachine: (no-preload-368813) DBG | Closing plugin on server side
	I0416 17:37:35.230109   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.893916   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.893935   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.894076   53724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.053083319s)
	I0416 17:37:35.894130   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.894147   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.894316   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.894332   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.894337   53724 main.go:141] libmachine: (no-preload-368813) DBG | Closing plugin on server side
	I0416 17:37:35.894362   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.894374   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.894382   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.894389   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.894340   53724 main.go:141] libmachine: Making call to close driver server
	I0416 17:37:35.894460   53724 main.go:141] libmachine: (no-preload-368813) Calling .Close
	I0416 17:37:35.894597   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.894611   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.894673   53724 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:37:35.894687   53724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:37:35.894705   53724 addons.go:470] Verifying addon metrics-server=true in "no-preload-368813"
	I0416 17:37:35.897547   53724 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 17:37:35.898886   53724 addons.go:505] duration metric: took 1.414544018s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 17:37:36.722873   53724 node_ready.go:53] node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:39.223607   53724 node_ready.go:53] node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:37.565118   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:40.064645   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:41.722887   53724 node_ready.go:53] node "no-preload-368813" has status "Ready":"False"
	I0416 17:37:42.225863   53724 node_ready.go:49] node "no-preload-368813" has status "Ready":"True"
	I0416 17:37:42.225883   53724 node_ready.go:38] duration metric: took 7.506668596s for node "no-preload-368813" to be "Ready" ...
	I0416 17:37:42.225891   53724 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:37:42.232019   53724 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:42.239399   53724 pod_ready.go:92] pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:42.239424   53724 pod_ready.go:81] duration metric: took 7.382463ms for pod "coredns-7db6d8ff4d-69lpx" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:42.239434   53724 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:44.245133   53724 pod_ready.go:102] pod "etcd-no-preload-368813" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:42.564211   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:44.564866   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:45.746505   53724 pod_ready.go:92] pod "etcd-no-preload-368813" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.746524   53724 pod_ready.go:81] duration metric: took 3.507082575s for pod "etcd-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.746533   53724 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.751714   53724 pod_ready.go:92] pod "kube-apiserver-no-preload-368813" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.751735   53724 pod_ready.go:81] duration metric: took 5.194687ms for pod "kube-apiserver-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.751744   53724 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.757023   53724 pod_ready.go:92] pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.757044   53724 pod_ready.go:81] duration metric: took 5.292895ms for pod "kube-controller-manager-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.757055   53724 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jtn9f" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.762143   53724 pod_ready.go:92] pod "kube-proxy-jtn9f" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.762160   53724 pod_ready.go:81] duration metric: took 5.099368ms for pod "kube-proxy-jtn9f" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.762168   53724 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.824087   53724 pod_ready.go:92] pod "kube-scheduler-no-preload-368813" in "kube-system" namespace has status "Ready":"True"
	I0416 17:37:45.824114   53724 pod_ready.go:81] duration metric: took 61.936492ms for pod "kube-scheduler-no-preload-368813" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:45.824127   53724 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace to be "Ready" ...
	I0416 17:37:47.833773   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:47.064361   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:49.065629   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:50.332513   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:52.829819   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:51.564287   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:53.565257   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:54.832367   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:57.333539   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:56.063366   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:58.064649   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:37:59.830643   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:01.830706   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:03.831546   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:00.564098   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:02.564321   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:05.064376   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:06.332358   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:08.332809   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:07.066411   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:09.564507   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:10.335688   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:12.831165   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:12.065479   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:14.564685   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:14.831349   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:17.334921   53724 pod_ready.go:102] pod "metrics-server-569cc877fc-tt8vp" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:16.565159   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:19.064669   53759 pod_ready.go:102] pod "metrics-server-57f55c9bc5-b8sw5" in "kube-system" namespace has status "Ready":"False"
	I0416 17:38:21.456413   52649 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 17:38:21.456505   52649 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 17:38:21.458335   52649 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 17:38:21.458412   52649 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:38:21.458508   52649 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:38:21.458643   52649 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:38:21.458785   52649 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:38:21.458894   52649 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:38:21.460865   52649 out.go:204]   - Generating certificates and keys ...
	I0416 17:38:21.460958   52649 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:38:21.461049   52649 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:38:21.461155   52649 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 17:38:21.461246   52649 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 17:38:21.461344   52649 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 17:38:21.461405   52649 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 17:38:21.461459   52649 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 17:38:21.461510   52649 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 17:38:21.461577   52649 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 17:38:21.461655   52649 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 17:38:21.461693   52649 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 17:38:21.461742   52649 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:38:21.461785   52649 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:38:21.461863   52649 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:38:21.461929   52649 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:38:21.462002   52649 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:38:21.462136   52649 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:38:21.462265   52649 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:38:21.462335   52649 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:38:21.462420   52649 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:38:21.463927   52649 out.go:204]   - Booting up control plane ...
	I0416 17:38:21.464008   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:38:21.464082   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:38:21.464158   52649 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:38:21.464243   52649 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:38:21.464465   52649 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:38:21.464563   52649 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 17:38:21.464669   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.464832   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.464919   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465080   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465137   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465369   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465440   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465617   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465696   52649 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 17:38:21.465892   52649 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 17:38:21.465907   52649 kubeadm.go:309] 
	I0416 17:38:21.465940   52649 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 17:38:21.465975   52649 kubeadm.go:309] 		timed out waiting for the condition
	I0416 17:38:21.465982   52649 kubeadm.go:309] 
	I0416 17:38:21.466011   52649 kubeadm.go:309] 	This error is likely caused by:
	I0416 17:38:21.466040   52649 kubeadm.go:309] 		- The kubelet is not running
	I0416 17:38:21.466153   52649 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 17:38:21.466164   52649 kubeadm.go:309] 
	I0416 17:38:21.466251   52649 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 17:38:21.466289   52649 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 17:38:21.466329   52649 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 17:38:21.466340   52649 kubeadm.go:309] 
	I0416 17:38:21.466452   52649 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 17:38:21.466521   52649 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 17:38:21.466529   52649 kubeadm.go:309] 
	I0416 17:38:21.466622   52649 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 17:38:21.466695   52649 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 17:38:21.466765   52649 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 17:38:21.466830   52649 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 17:38:21.466852   52649 kubeadm.go:309] 
	I0416 17:38:21.466885   52649 kubeadm.go:393] duration metric: took 8m3.560726976s to StartCluster
	I0416 17:38:21.466921   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 17:38:21.466981   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 17:38:21.517447   52649 cri.go:89] found id: ""
	I0416 17:38:21.517474   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.517485   52649 logs.go:278] No container was found matching "kube-apiserver"
	I0416 17:38:21.517493   52649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 17:38:21.517556   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 17:38:21.558224   52649 cri.go:89] found id: ""
	I0416 17:38:21.558250   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.558260   52649 logs.go:278] No container was found matching "etcd"
	I0416 17:38:21.558267   52649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 17:38:21.558326   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 17:38:21.608680   52649 cri.go:89] found id: ""
	I0416 17:38:21.608712   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.608727   52649 logs.go:278] No container was found matching "coredns"
	I0416 17:38:21.608735   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 17:38:21.608786   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 17:38:21.648819   52649 cri.go:89] found id: ""
	I0416 17:38:21.648860   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.648867   52649 logs.go:278] No container was found matching "kube-scheduler"
	I0416 17:38:21.648873   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 17:38:21.648917   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 17:38:21.689263   52649 cri.go:89] found id: ""
	I0416 17:38:21.689300   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.689310   52649 logs.go:278] No container was found matching "kube-proxy"
	I0416 17:38:21.689317   52649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 17:38:21.689374   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 17:38:21.729665   52649 cri.go:89] found id: ""
	I0416 17:38:21.729694   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.729703   52649 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 17:38:21.729709   52649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 17:38:21.729755   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 17:38:21.768070   52649 cri.go:89] found id: ""
	I0416 17:38:21.768096   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.768103   52649 logs.go:278] No container was found matching "kindnet"
	I0416 17:38:21.768109   52649 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 17:38:21.768158   52649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 17:38:21.803401   52649 cri.go:89] found id: ""
	I0416 17:38:21.803425   52649 logs.go:276] 0 containers: []
	W0416 17:38:21.803435   52649 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 17:38:21.803446   52649 logs.go:123] Gathering logs for kubelet ...
	I0416 17:38:21.803461   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 17:38:21.859787   52649 logs.go:123] Gathering logs for dmesg ...
	I0416 17:38:21.859820   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 17:38:21.874861   52649 logs.go:123] Gathering logs for describe nodes ...
	I0416 17:38:21.874887   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 17:38:21.962673   52649 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 17:38:21.962700   52649 logs.go:123] Gathering logs for CRI-O ...
	I0416 17:38:21.962713   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 17:38:22.072141   52649 logs.go:123] Gathering logs for container status ...
	I0416 17:38:22.072172   52649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0416 17:38:22.120555   52649 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 17:38:22.120603   52649 out.go:239] * 
	W0416 17:38:22.120651   52649 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:38:22.120675   52649 out.go:239] * 
	W0416 17:38:22.121636   52649 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:38:22.125185   52649 out.go:177] 
	W0416 17:38:22.126349   52649 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 17:38:22.126406   52649 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 17:38:22.126429   52649 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 17:38:22.127951   52649 out.go:177] 
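	The kubeadm output above repeats the same troubleshooting guidance each time the kubelet fails its health check. A minimal sketch of those checks, assembled only from the commands the log itself suggests; it assumes you shell into the failing profile's VM first (the profile name old-k8s-version-795352 is taken from the CRI-O log below), and CONTAINERID is a placeholder to fill in from the ps output:

	    # Shell into the affected node (profile name taken from the CRI-O log below).
	    minikube ssh -p old-k8s-version-795352

	    # Inside the VM: check whether the kubelet is running, and why it might not be.
	    systemctl status kubelet
	    journalctl -xeu kubelet

	    # List any control-plane containers CRI-O started, then inspect a failing one.
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # replace CONTAINERID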
	
	
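	The final suggestion in the log is to retry the start with the kubelet's cgroup driver pinned to systemd. A hedged example of what that retry could look like for this profile: the --extra-config flag comes straight from the suggestion above, while the driver, container runtime, and Kubernetes version are inferred from elsewhere in this report (KVM, CRI-O, v1.20.0) and are assumptions that may need adjusting:

	    # Retry the failed profile with the cgroup driver set explicitly, as suggested above.
	    minikube start -p old-k8s-version-795352 \
	      --driver=kvm2 \
	      --container-runtime=crio \
	      --kubernetes-version=v1.20.0 \
	      --extra-config=kubelet.cgroup-driver=systemd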
	==> CRI-O <==
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.141299561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289103141274389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1dbe9d9a-bcea-4978-8611-4b9f4431b46d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.141935894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95c8510e-d4b9-4f54-abdf-f600763d656c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.142009056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95c8510e-d4b9-4f54-abdf-f600763d656c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.142054100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=95c8510e-d4b9-4f54-abdf-f600763d656c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.179701803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fa8c01a-806a-4096-b02b-18362002e039 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.179811573Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fa8c01a-806a-4096-b02b-18362002e039 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.181068126Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b89c5b0a-bec0-4162-a57c-3d766aecb8ad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.181591388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289103181561257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b89c5b0a-bec0-4162-a57c-3d766aecb8ad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.182078473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e044ed5a-8d1f-4f18-92f8-c9296138b37c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.182150730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e044ed5a-8d1f-4f18-92f8-c9296138b37c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.182191615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e044ed5a-8d1f-4f18-92f8-c9296138b37c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.218205914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a67d563-95a0-4f48-9fd6-18a0a3406ffa name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.218307869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a67d563-95a0-4f48-9fd6-18a0a3406ffa name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.219794551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4651ee12-55fa-4e00-b196-591ce80734e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.220219311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289103220196374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4651ee12-55fa-4e00-b196-591ce80734e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.221184331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45da875e-55ba-40de-a8da-086316b1fe3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.221272710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45da875e-55ba-40de-a8da-086316b1fe3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.221309427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=45da875e-55ba-40de-a8da-086316b1fe3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.258958030Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3e41fd0-6d96-48cc-923b-2ad1b6b225b0 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.259061843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3e41fd0-6d96-48cc-923b-2ad1b6b225b0 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.260846896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66399d39-e4a5-4d88-ab24-99ed90676331 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.261267259Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289103261241929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66399d39-e4a5-4d88-ab24-99ed90676331 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.261976279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea31afc8-c75c-4044-9729-381784212e46 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.262055318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea31afc8-c75c-4044-9729-381784212e46 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:38:23 old-k8s-version-795352 crio[644]: time="2024-04-16 17:38:23.262093102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ea31afc8-c75c-4044-9729-381784212e46 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr16 17:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052410] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043185] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.634593] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr16 17:30] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.551816] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.379764] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.063374] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074890] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.187967] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.157390] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.275173] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.631339] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.063173] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.875212] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +14.495355] kauditd_printk_skb: 46 callbacks suppressed
	[Apr16 17:34] systemd-fstab-generator[5060]: Ignoring "noauto" option for root device
	[Apr16 17:36] systemd-fstab-generator[5351]: Ignoring "noauto" option for root device
	[  +0.068573] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 17:38:23 up 8 min,  0 users,  load average: 0.04, 0.14, 0.08
	Linux old-k8s-version-795352 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: goroutine 155 [sleep]:
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: time.Sleep(0xf2e25)
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /usr/local/go/src/runtime/time.go:188 +0xbf
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.(*rudimentaryErrorBackoff).OnError(0xc0000acba0, 0x4f04d00, 0xc000b7b1d0)
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:133 +0xfa
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleError(0x4f04d00, 0xc000b7b1d0)
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc0008d2c40, 0x4f04d00, 0xc000b7b180)
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0008556f0)
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bd3ef0, 0x4f0ac20, 0xc000977b30, 0x1, 0xc00009e0c0)
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0008d2c40, 0xc00009e0c0)
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b58800, 0xc000b70ec0)
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 16 17:38:23 old-k8s-version-795352 kubelet[5530]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 16 17:38:23 old-k8s-version-795352 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 16 17:38:23 old-k8s-version-795352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 2 (263.470209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-795352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (513.84s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813: exit status 3 (3.199659079s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:31:20.289221   53503 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.33:22: connect: no route to host
	E0416 17:31:20.289242   53503 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.33:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-368813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-368813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152382858s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.33:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-368813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
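The box in the stderr above asks for a log bundle when reporting this; with the binary and profile used in this run that is roughly the following (a sketch, and it can only succeed once the host is reachable over SSH again, which is exactly what is failing here):

	out/minikube-linux-amd64 -p no-preload-368813 logs --file=logs.txt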
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813: exit status 3 (3.063496126s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:31:29.505131   53664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.33:22: connect: no route to host
	E0416 17:31:29.505149   53664 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.33:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-368813" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869: exit status 3 (3.166551089s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:31:21.025213   53532 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.141:22: connect: no route to host
	E0416 17:31:21.025236   53532 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.141:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-512869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-512869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153048191s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.83.141:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-512869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869: exit status 3 (3.062299685s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:31:30.241140   53694 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.141:22: connect: no route to host
	E0416 17:31:30.241172   53694 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.141:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-512869" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
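That 9m0s wait is a poll of the kubernetes-dashboard namespace; a rough manual equivalent (a sketch, assuming the kubeconfig context is named after the profile, as minikube normally sets it up) is:

	kubectl --context old-k8s-version-795352 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

Every attempt below fails the same way because nothing is answering on 192.168.50.168:8443.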
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
E0416 17:38:26.938029   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
E0416 17:42:03.890274   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
E0416 17:42:10.029880   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: [warning above repeated 74 more times: dial tcp 192.168.50.168:8443: connect: connection refused]
E0416 17:43:33.078281   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: [same warning repeated 104 more times: dial tcp 192.168.50.168:8443: connect: connection refused]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
E0416 17:47:03.889767   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
E0416 17:47:10.030577   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 2 (236.464354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-795352" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
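For context, the repeated "connection refused" warnings above come from the test helper polling the Kubernetes API for pods labelled k8s-app=kubernetes-dashboard while the apiserver at 192.168.50.168:8443 is down, until the 9m0s deadline expires. The sketch below is a minimal client-go approximation of that list call, assuming a hypothetical kubeconfig path; it is illustrative only and is not the project's actual helper code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location for illustration; the real test
	// harness builds its client from the profile's generated config.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same kind of request the warnings show: list pods in the
	// kubernetes-dashboard namespace filtered by the k8s-app label.
	pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		// With the apiserver stopped this surfaces the same
		// "dial tcp ... connect: connection refused" error seen above.
		fmt.Println("pod list failed:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}

Under these assumptions, each failed poll produces one warning like those above, and the wait gives up with "context deadline exceeded" once the overall timeout is reached.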
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 2 (231.198695ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-795352 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:30 UTC | 16 Apr 24 17:31 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	| delete  | -p                                                     | disable-driver-mounts-376814 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	|         | disable-driver-mounts-376814                           |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-368813                  | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-512869                 | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:35 UTC |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:37 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC | 16 Apr 24 17:38 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:38 UTC | 16 Apr 24 17:38 UTC |
	| start   | -p stopped-upgrade-446675                              | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:38 UTC | 16 Apr 24 17:39 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	| stop    | stopped-upgrade-446675 stop                            | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:39 UTC | 16 Apr 24 17:39 UTC |
	| start   | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:39 UTC | 16 Apr 24 17:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:40 UTC |
	| start   | -p pause-970622 --memory=2048                          | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:42 UTC |
	|         | --install-addons=false                                 |                              |         |                |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:42 UTC | 16 Apr 24 17:43 UTC |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:43 UTC | 16 Apr 24 17:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:43 UTC | 16 Apr 24 17:44 UTC |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-304316  | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:44 UTC | 16 Apr 24 17:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:44 UTC |                     |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-304316       | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:46 UTC |                     |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:46:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:46:56.791301   59445 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:46:56.791849   59445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:46:56.791869   59445 out.go:304] Setting ErrFile to fd 2...
	I0416 17:46:56.791877   59445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:46:56.792352   59445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:46:56.793181   59445 out.go:298] Setting JSON to false
	I0416 17:46:56.794302   59445 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5369,"bootTime":1713284248,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:46:56.794364   59445 start.go:139] virtualization: kvm guest
	I0416 17:46:56.796934   59445 out.go:177] * [default-k8s-diff-port-304316] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:46:56.798418   59445 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:46:56.798451   59445 notify.go:220] Checking for updates...
	I0416 17:46:56.799763   59445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:46:56.801294   59445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:46:56.802621   59445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:46:56.803945   59445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:46:56.805309   59445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:46:56.807263   59445 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:46:56.807849   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.807910   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.822814   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40915
	I0416 17:46:56.823221   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.823677   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.823699   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.823980   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.824113   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.824309   59445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:46:56.824572   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.824603   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.839091   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0416 17:46:56.839441   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.839889   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.839915   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.840218   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.840429   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.875588   59445 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:46:56.876934   59445 start.go:297] selected driver: kvm2
	I0416 17:46:56.876949   59445 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:46:56.877057   59445 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:46:56.877720   59445 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:46:56.877855   59445 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:46:56.891935   59445 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:46:56.892284   59445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:46:56.892355   59445 cni.go:84] Creating CNI manager for ""
	I0416 17:46:56.892367   59445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:46:56.892408   59445 start.go:340] cluster config:
	{Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:46:56.892493   59445 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:46:56.894869   59445 out.go:177] * Starting "default-k8s-diff-port-304316" primary control-plane node in "default-k8s-diff-port-304316" cluster
	I0416 17:46:56.896238   59445 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:46:56.896274   59445 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:46:56.896292   59445 cache.go:56] Caching tarball of preloaded images
	I0416 17:46:56.896377   59445 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:46:56.896392   59445 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:46:56.896522   59445 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/config.json ...
	I0416 17:46:56.896735   59445 start.go:360] acquireMachinesLock for default-k8s-diff-port-304316: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:46:56.896788   59445 start.go:364] duration metric: took 28.964µs to acquireMachinesLock for "default-k8s-diff-port-304316"
	I0416 17:46:56.896810   59445 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:46:56.896824   59445 fix.go:54] fixHost starting: 
	I0416 17:46:56.897218   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.897257   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.910980   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0416 17:46:56.911374   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.911838   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.911861   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.912201   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.912387   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.912575   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:46:56.914179   59445 fix.go:112] recreateIfNeeded on default-k8s-diff-port-304316: state=Running err=<nil>
	W0416 17:46:56.914196   59445 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:46:56.916138   59445 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-304316" VM ...
	I0416 17:46:56.917401   59445 machine.go:94] provisionDockerMachine start ...
	I0416 17:46:56.917423   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.917604   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:46:56.919801   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:46:56.920180   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:43:26 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:46:56.920217   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:46:56.920347   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:46:56.920540   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:46:56.920688   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:46:56.920819   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:46:56.920959   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:46:56.921119   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:46:56.921129   59445 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:46:59.809186   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:02.881077   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:08.961238   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:12.033053   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:18.113089   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:21.185113   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	
	
	==> CRI-O <==
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.664338997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289644664310601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da549a6f-6bd9-4ce9-bf63-117d00db0ac9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.665100437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efa9b22e-9b1f-440b-914f-5f9ba0ea1a17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.665151899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efa9b22e-9b1f-440b-914f-5f9ba0ea1a17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.665187754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=efa9b22e-9b1f-440b-914f-5f9ba0ea1a17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.701041827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2a011a1-1c51-4a47-ad79-ad64f8cbafdf name=/runtime.v1.RuntimeService/Version
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.701116903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2a011a1-1c51-4a47-ad79-ad64f8cbafdf name=/runtime.v1.RuntimeService/Version
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.702392086Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ade9962-719c-4faf-9cd4-3a2e082812d1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.702861879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289644702828821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ade9962-719c-4faf-9cd4-3a2e082812d1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.703351385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4a1114c-07d9-4ee3-933f-9e68f4118de2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.703403814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4a1114c-07d9-4ee3-933f-9e68f4118de2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.703438238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e4a1114c-07d9-4ee3-933f-9e68f4118de2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.740060362Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0da8e21-a1c5-4897-8ebb-318a3817fda7 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.740132353Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0da8e21-a1c5-4897-8ebb-318a3817fda7 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.741892018Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47bb09b4-14d0-4ccc-980c-5148e830ffab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.742258329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289644742235776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47bb09b4-14d0-4ccc-980c-5148e830ffab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.742925703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f736501-1aa8-4beb-ac4d-0e8ac86b2644 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.742981815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f736501-1aa8-4beb-ac4d-0e8ac86b2644 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.743015620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7f736501-1aa8-4beb-ac4d-0e8ac86b2644 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.780634234Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db154159-0a97-43c8-9e77-6688ede6ff36 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.780704721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db154159-0a97-43c8-9e77-6688ede6ff36 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.782158228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f28e62ca-3a45-4a92-85f4-16d4672b0424 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.782635778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289644782598059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f28e62ca-3a45-4a92-85f4-16d4672b0424 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.783148261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ecc45da-c2c6-4a1d-bf4c-6c656d57206d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.783212226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ecc45da-c2c6-4a1d-bf4c-6c656d57206d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:47:24 old-k8s-version-795352 crio[644]: time="2024-04-16 17:47:24.783268952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4ecc45da-c2c6-4a1d-bf4c-6c656d57206d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr16 17:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052410] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043185] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.634593] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr16 17:30] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.551816] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.379764] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.063374] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074890] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.187967] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.157390] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.275173] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.631339] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.063173] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.875212] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +14.495355] kauditd_printk_skb: 46 callbacks suppressed
	[Apr16 17:34] systemd-fstab-generator[5060]: Ignoring "noauto" option for root device
	[Apr16 17:36] systemd-fstab-generator[5351]: Ignoring "noauto" option for root device
	[  +0.068573] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 17:47:24 up 17 min,  0 users,  load average: 0.04, 0.03, 0.04
	Linux old-k8s-version-795352 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000895ef0)
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009b7ef0, 0x4f0ac20, 0xc0003aed70, 0x1, 0xc0001000c0)
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000914c40, 0xc0001000c0)
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00094f550, 0xc00000f180)
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 16 17:47:22 old-k8s-version-795352 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 16 17:47:22 old-k8s-version-795352 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 16 17:47:22 old-k8s-version-795352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 16 17:47:23 old-k8s-version-795352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Apr 16 17:47:23 old-k8s-version-795352 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 16 17:47:23 old-k8s-version-795352 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 16 17:47:23 old-k8s-version-795352 kubelet[6536]: I0416 17:47:23.553340    6536 server.go:416] Version: v1.20.0
	Apr 16 17:47:23 old-k8s-version-795352 kubelet[6536]: I0416 17:47:23.553713    6536 server.go:837] Client rotation is on, will bootstrap in background
	Apr 16 17:47:23 old-k8s-version-795352 kubelet[6536]: I0416 17:47:23.555630    6536 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 16 17:47:23 old-k8s-version-795352 kubelet[6536]: I0416 17:47:23.556798    6536 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 16 17:47:23 old-k8s-version-795352 kubelet[6536]: W0416 17:47:23.556810    6536 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 2 (226.630541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-795352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512869 -n embed-certs-512869
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-16 17:50:52.299275446 +0000 UTC m=+5487.697951970
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-512869 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-512869 logs -n 25: (1.271277151s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:30 UTC | 16 Apr 24 17:31 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	| delete  | -p                                                     | disable-driver-mounts-376814 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	|         | disable-driver-mounts-376814                           |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-368813                  | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-512869                 | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:35 UTC |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:37 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC | 16 Apr 24 17:38 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:38 UTC | 16 Apr 24 17:38 UTC |
	| start   | -p stopped-upgrade-446675                              | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:38 UTC | 16 Apr 24 17:39 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	| stop    | stopped-upgrade-446675 stop                            | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:39 UTC | 16 Apr 24 17:39 UTC |
	| start   | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:39 UTC | 16 Apr 24 17:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:40 UTC |
	| start   | -p pause-970622 --memory=2048                          | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:42 UTC |
	|         | --install-addons=false                                 |                              |         |                |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:42 UTC | 16 Apr 24 17:43 UTC |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:43 UTC | 16 Apr 24 17:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:43 UTC | 16 Apr 24 17:44 UTC |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-304316  | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:44 UTC | 16 Apr 24 17:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:44 UTC |                     |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-304316       | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:46 UTC |                     |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:46:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:46:56.791301   59445 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:46:56.791849   59445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:46:56.791869   59445 out.go:304] Setting ErrFile to fd 2...
	I0416 17:46:56.791877   59445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:46:56.792352   59445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:46:56.793181   59445 out.go:298] Setting JSON to false
	I0416 17:46:56.794302   59445 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5369,"bootTime":1713284248,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:46:56.794364   59445 start.go:139] virtualization: kvm guest
	I0416 17:46:56.796934   59445 out.go:177] * [default-k8s-diff-port-304316] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:46:56.798418   59445 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:46:56.798451   59445 notify.go:220] Checking for updates...
	I0416 17:46:56.799763   59445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:46:56.801294   59445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:46:56.802621   59445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:46:56.803945   59445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:46:56.805309   59445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:46:56.807263   59445 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:46:56.807849   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.807910   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.822814   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40915
	I0416 17:46:56.823221   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.823677   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.823699   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.823980   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.824113   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.824309   59445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:46:56.824572   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.824603   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.839091   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0416 17:46:56.839441   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.839889   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.839915   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.840218   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.840429   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.875588   59445 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:46:56.876934   59445 start.go:297] selected driver: kvm2
	I0416 17:46:56.876949   59445 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:46:56.877057   59445 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:46:56.877720   59445 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:46:56.877855   59445 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:46:56.891935   59445 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:46:56.892284   59445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:46:56.892355   59445 cni.go:84] Creating CNI manager for ""
	I0416 17:46:56.892367   59445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:46:56.892408   59445 start.go:340] cluster config:
	{Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:46:56.892493   59445 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:46:56.894869   59445 out.go:177] * Starting "default-k8s-diff-port-304316" primary control-plane node in "default-k8s-diff-port-304316" cluster
	I0416 17:46:56.896238   59445 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:46:56.896274   59445 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:46:56.896292   59445 cache.go:56] Caching tarball of preloaded images
	I0416 17:46:56.896377   59445 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:46:56.896392   59445 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:46:56.896522   59445 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/config.json ...
	I0416 17:46:56.896735   59445 start.go:360] acquireMachinesLock for default-k8s-diff-port-304316: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:46:56.896788   59445 start.go:364] duration metric: took 28.964µs to acquireMachinesLock for "default-k8s-diff-port-304316"
	I0416 17:46:56.896810   59445 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:46:56.896824   59445 fix.go:54] fixHost starting: 
	I0416 17:46:56.897218   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.897257   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.910980   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0416 17:46:56.911374   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.911838   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.911861   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.912201   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.912387   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.912575   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:46:56.914179   59445 fix.go:112] recreateIfNeeded on default-k8s-diff-port-304316: state=Running err=<nil>
	W0416 17:46:56.914196   59445 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:46:56.916138   59445 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-304316" VM ...
	I0416 17:46:56.917401   59445 machine.go:94] provisionDockerMachine start ...
	I0416 17:46:56.917423   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.917604   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:46:56.919801   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:46:56.920180   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:43:26 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:46:56.920217   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:46:56.920347   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:46:56.920540   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:46:56.920688   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:46:56.920819   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:46:56.920959   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:46:56.921119   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:46:56.921129   59445 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:46:59.809186   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:02.881077   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:08.961238   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:12.033053   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:18.113089   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:21.185113   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:30.305165   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:33.377208   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:39.457128   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:42.529153   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:48.609097   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:51.685040   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:57.761077   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:00.833230   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:06.913045   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:09.985120   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:16.065075   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:19.141101   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:25.221118   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:28.289135   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:34.369068   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:37.445091   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:43.521090   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:46.593167   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:52.673093   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:55.745116   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:01.825195   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:04.897276   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:10.977087   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:14.049089   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:20.129139   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:23.201163   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:29.281110   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:32.353103   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:38.433052   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:41.505072   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:47.585081   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:50.657107   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:56.737202   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:59.809144   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:05.889152   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:08.965116   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:15.041030   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:18.117063   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:24.193083   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:27.265045   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:33.345075   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:36.417221   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:42.497055   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:45.573055   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:51.649098   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	
	
	==> CRI-O <==
	Apr 16 17:50:52 embed-certs-512869 crio[734]: time="2024-04-16 17:50:52.990107042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289852990085487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52f28f26-7c9c-4330-acc3-6e99cce3ebb0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:52 embed-certs-512869 crio[734]: time="2024-04-16 17:50:52.990804701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f15f71de-933f-40ae-881b-42d93f785a4a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:52 embed-certs-512869 crio[734]: time="2024-04-16 17:50:52.990881351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f15f71de-933f-40ae-881b-42d93f785a4a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:52 embed-certs-512869 crio[734]: time="2024-04-16 17:50:52.991089863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac3befbcd4ab5383cd75068e9221bffe0cd5751ef72ce03cee4e3e5a8bf9bfa7,PodSandboxId:974b8077bc711a4508d6720b7ef2a81cb611d918065baf9897d284bbde430407,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289311191093658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913ab65e-4692-43fe-9160-4680d40d45ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea9ed6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e915af924f96433a47433525a85568e8338ecb63f016ab4a7294862eba0c2e,PodSandboxId:59378403dd979415b25c4d034f7b475f254b7bfc96466791ee420b471155465c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310465828038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mbxnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de57d75-6597-4fa8-bb38-f239a733477a,},Annotations:map[string]string{io.kubernetes.container.hash: aa6bebd1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1511852605ab7cb89dfa103aeb494f0ed3c42c748ba88684bada5318f8770e0,PodSandboxId:7325ee824c71a22611e7526575d587155bbaf9fd7de8629d048b326fe93d050a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310254234646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-slfsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b3b48ec-1ccb-4587-b9a0-75d6244dd3cf,},Annotations:map[string]string{io.kubernetes.container.hash: be012b20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8545f49aed2de0c5e35d3ebd149a48bcf7ea053cddfe13f1dd6195164255766,PodSandboxId:2c6508d1a8ff6929e82baa662d8e7dce78ae927adceaf5305c1d64dc7f73daa6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1713289309493497773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxdwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a03621-b707-49f1-a9f5-a8a3c73558eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d59805,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743f47d8b49851e6d029d1d5380f371249090d8a90dce3827d960c04685bb773,PodSandboxId:ac0cd97d0c3fe59b3d09a78d09401de6b06c6b288480a5b8658ffd2ed6ed157b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289289849247903,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3094b63b6dd171a81c08f1af4f0f2593,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ec0d0568d7e2360251eeeed4e659d5c69444e0baa3c948ebd3a1a0ada9a8c0,PodSandboxId:5d45bf703cee98e0182004b3c963f95f83000314d4650c785de6fd782a03ad6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289289847641241,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e8bacc4d98e0be0efa2f5fdaa22e7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487358ff90e129ea02ae984bc0a49498b5070b92f33f33e6292a9ff8894e8097,PodSandboxId:bba65a63f6b06d90593cd0a518fa88866be2677f1d6605412fb0b22967fdd8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289289781406846,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b66a67eaa290f5599a2d92f87e20a156,},Annotations:map[string]string{io.kubernetes.container.hash: a119b7b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266da1a1d2146172fc98ad3dd21efbf921afd6a12eaaf576e7076a8899ee51c9,PodSandboxId:a8526582712ad2a7267ad5205b8ed1839b0a4dc25526dcac84d88e9ad222fa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289289728089604,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 038ac1f610eb129ba18a8faf62ee9d65,},Annotations:map[string]string{io.kubernetes.container.hash: 4ceeca6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f15f71de-933f-40ae-881b-42d93f785a4a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.033641173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ff5f32a-401a-4139-9085-4d7b839ac37a name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.033887491Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ff5f32a-401a-4139-9085-4d7b839ac37a name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.035182684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6043391-6027-47c5-b13a-7ed13aaac336 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.035872099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289853035849026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6043391-6027-47c5-b13a-7ed13aaac336 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.037023416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8494cce4-49cf-4036-836a-f2932f3af9c8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.037230292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8494cce4-49cf-4036-836a-f2932f3af9c8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.037528279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac3befbcd4ab5383cd75068e9221bffe0cd5751ef72ce03cee4e3e5a8bf9bfa7,PodSandboxId:974b8077bc711a4508d6720b7ef2a81cb611d918065baf9897d284bbde430407,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289311191093658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913ab65e-4692-43fe-9160-4680d40d45ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea9ed6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e915af924f96433a47433525a85568e8338ecb63f016ab4a7294862eba0c2e,PodSandboxId:59378403dd979415b25c4d034f7b475f254b7bfc96466791ee420b471155465c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310465828038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mbxnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de57d75-6597-4fa8-bb38-f239a733477a,},Annotations:map[string]string{io.kubernetes.container.hash: aa6bebd1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1511852605ab7cb89dfa103aeb494f0ed3c42c748ba88684bada5318f8770e0,PodSandboxId:7325ee824c71a22611e7526575d587155bbaf9fd7de8629d048b326fe93d050a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310254234646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-slfsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b3b48ec-1ccb-4587-b9a0-75d6244dd3cf,},Annotations:map[string]string{io.kubernetes.container.hash: be012b20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8545f49aed2de0c5e35d3ebd149a48bcf7ea053cddfe13f1dd6195164255766,PodSandboxId:2c6508d1a8ff6929e82baa662d8e7dce78ae927adceaf5305c1d64dc7f73daa6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1713289309493497773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxdwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a03621-b707-49f1-a9f5-a8a3c73558eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d59805,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743f47d8b49851e6d029d1d5380f371249090d8a90dce3827d960c04685bb773,PodSandboxId:ac0cd97d0c3fe59b3d09a78d09401de6b06c6b288480a5b8658ffd2ed6ed157b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289289849247903,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3094b63b6dd171a81c08f1af4f0f2593,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ec0d0568d7e2360251eeeed4e659d5c69444e0baa3c948ebd3a1a0ada9a8c0,PodSandboxId:5d45bf703cee98e0182004b3c963f95f83000314d4650c785de6fd782a03ad6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289289847641241,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e8bacc4d98e0be0efa2f5fdaa22e7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487358ff90e129ea02ae984bc0a49498b5070b92f33f33e6292a9ff8894e8097,PodSandboxId:bba65a63f6b06d90593cd0a518fa88866be2677f1d6605412fb0b22967fdd8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289289781406846,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b66a67eaa290f5599a2d92f87e20a156,},Annotations:map[string]string{io.kubernetes.container.hash: a119b7b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266da1a1d2146172fc98ad3dd21efbf921afd6a12eaaf576e7076a8899ee51c9,PodSandboxId:a8526582712ad2a7267ad5205b8ed1839b0a4dc25526dcac84d88e9ad222fa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289289728089604,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 038ac1f610eb129ba18a8faf62ee9d65,},Annotations:map[string]string{io.kubernetes.container.hash: 4ceeca6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8494cce4-49cf-4036-836a-f2932f3af9c8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.080832766Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92b6717d-82d7-49ca-9ce1-3ee09fa395a2 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.080906512Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92b6717d-82d7-49ca-9ce1-3ee09fa395a2 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.082256271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1845f4f4-755c-4a50-80c8-2b1813c38c25 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.082807190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289853082783595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1845f4f4-755c-4a50-80c8-2b1813c38c25 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.083717583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aeb7081e-d63a-4115-86c7-7467783e1757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.083767738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aeb7081e-d63a-4115-86c7-7467783e1757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.083980819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac3befbcd4ab5383cd75068e9221bffe0cd5751ef72ce03cee4e3e5a8bf9bfa7,PodSandboxId:974b8077bc711a4508d6720b7ef2a81cb611d918065baf9897d284bbde430407,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289311191093658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913ab65e-4692-43fe-9160-4680d40d45ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea9ed6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e915af924f96433a47433525a85568e8338ecb63f016ab4a7294862eba0c2e,PodSandboxId:59378403dd979415b25c4d034f7b475f254b7bfc96466791ee420b471155465c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310465828038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mbxnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de57d75-6597-4fa8-bb38-f239a733477a,},Annotations:map[string]string{io.kubernetes.container.hash: aa6bebd1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1511852605ab7cb89dfa103aeb494f0ed3c42c748ba88684bada5318f8770e0,PodSandboxId:7325ee824c71a22611e7526575d587155bbaf9fd7de8629d048b326fe93d050a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310254234646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-slfsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b3b48ec-1ccb-4587-b9a0-75d6244dd3cf,},Annotations:map[string]string{io.kubernetes.container.hash: be012b20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8545f49aed2de0c5e35d3ebd149a48bcf7ea053cddfe13f1dd6195164255766,PodSandboxId:2c6508d1a8ff6929e82baa662d8e7dce78ae927adceaf5305c1d64dc7f73daa6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1713289309493497773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxdwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a03621-b707-49f1-a9f5-a8a3c73558eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d59805,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743f47d8b49851e6d029d1d5380f371249090d8a90dce3827d960c04685bb773,PodSandboxId:ac0cd97d0c3fe59b3d09a78d09401de6b06c6b288480a5b8658ffd2ed6ed157b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289289849247903,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3094b63b6dd171a81c08f1af4f0f2593,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ec0d0568d7e2360251eeeed4e659d5c69444e0baa3c948ebd3a1a0ada9a8c0,PodSandboxId:5d45bf703cee98e0182004b3c963f95f83000314d4650c785de6fd782a03ad6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289289847641241,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e8bacc4d98e0be0efa2f5fdaa22e7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487358ff90e129ea02ae984bc0a49498b5070b92f33f33e6292a9ff8894e8097,PodSandboxId:bba65a63f6b06d90593cd0a518fa88866be2677f1d6605412fb0b22967fdd8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289289781406846,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b66a67eaa290f5599a2d92f87e20a156,},Annotations:map[string]string{io.kubernetes.container.hash: a119b7b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266da1a1d2146172fc98ad3dd21efbf921afd6a12eaaf576e7076a8899ee51c9,PodSandboxId:a8526582712ad2a7267ad5205b8ed1839b0a4dc25526dcac84d88e9ad222fa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289289728089604,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 038ac1f610eb129ba18a8faf62ee9d65,},Annotations:map[string]string{io.kubernetes.container.hash: 4ceeca6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aeb7081e-d63a-4115-86c7-7467783e1757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.118854584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb8dccef-60fa-432d-abfc-3a9426538290 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.118948856Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb8dccef-60fa-432d-abfc-3a9426538290 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.120846891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b1bc56c-ac73-4252-bc21-bb0e3d389d0b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.121251669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289853121227370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b1bc56c-ac73-4252-bc21-bb0e3d389d0b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.121824126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7db3d591-0c78-40e6-b01d-32e6760b50e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.121874446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7db3d591-0c78-40e6-b01d-32e6760b50e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:53 embed-certs-512869 crio[734]: time="2024-04-16 17:50:53.122078996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac3befbcd4ab5383cd75068e9221bffe0cd5751ef72ce03cee4e3e5a8bf9bfa7,PodSandboxId:974b8077bc711a4508d6720b7ef2a81cb611d918065baf9897d284bbde430407,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289311191093658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913ab65e-4692-43fe-9160-4680d40d45ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea9ed6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e915af924f96433a47433525a85568e8338ecb63f016ab4a7294862eba0c2e,PodSandboxId:59378403dd979415b25c4d034f7b475f254b7bfc96466791ee420b471155465c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310465828038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mbxnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de57d75-6597-4fa8-bb38-f239a733477a,},Annotations:map[string]string{io.kubernetes.container.hash: aa6bebd1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1511852605ab7cb89dfa103aeb494f0ed3c42c748ba88684bada5318f8770e0,PodSandboxId:7325ee824c71a22611e7526575d587155bbaf9fd7de8629d048b326fe93d050a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310254234646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-slfsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b3b48ec-1ccb-4587-b9a0-75d6244dd3cf,},Annotations:map[string]string{io.kubernetes.container.hash: be012b20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8545f49aed2de0c5e35d3ebd149a48bcf7ea053cddfe13f1dd6195164255766,PodSandboxId:2c6508d1a8ff6929e82baa662d8e7dce78ae927adceaf5305c1d64dc7f73daa6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1713289309493497773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxdwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a03621-b707-49f1-a9f5-a8a3c73558eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d59805,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743f47d8b49851e6d029d1d5380f371249090d8a90dce3827d960c04685bb773,PodSandboxId:ac0cd97d0c3fe59b3d09a78d09401de6b06c6b288480a5b8658ffd2ed6ed157b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289289849247903,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3094b63b6dd171a81c08f1af4f0f2593,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ec0d0568d7e2360251eeeed4e659d5c69444e0baa3c948ebd3a1a0ada9a8c0,PodSandboxId:5d45bf703cee98e0182004b3c963f95f83000314d4650c785de6fd782a03ad6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289289847641241,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e8bacc4d98e0be0efa2f5fdaa22e7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487358ff90e129ea02ae984bc0a49498b5070b92f33f33e6292a9ff8894e8097,PodSandboxId:bba65a63f6b06d90593cd0a518fa88866be2677f1d6605412fb0b22967fdd8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289289781406846,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b66a67eaa290f5599a2d92f87e20a156,},Annotations:map[string]string{io.kubernetes.container.hash: a119b7b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266da1a1d2146172fc98ad3dd21efbf921afd6a12eaaf576e7076a8899ee51c9,PodSandboxId:a8526582712ad2a7267ad5205b8ed1839b0a4dc25526dcac84d88e9ad222fa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289289728089604,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 038ac1f610eb129ba18a8faf62ee9d65,},Annotations:map[string]string{io.kubernetes.container.hash: 4ceeca6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7db3d591-0c78-40e6-b01d-32e6760b50e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac3befbcd4ab5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   974b8077bc711       storage-provisioner
	d1e915af924f9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   59378403dd979       coredns-76f75df574-mbxnj
	f1511852605ab       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7325ee824c71a       coredns-76f75df574-slfsc
	c8545f49aed2d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   2c6508d1a8ff6       kube-proxy-vxdwg
	743f47d8b4985       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   ac0cd97d0c3fe       kube-scheduler-embed-certs-512869
	d5ec0d0568d7e       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   5d45bf703cee9       kube-controller-manager-embed-certs-512869
	487358ff90e12       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   bba65a63f6b06       kube-apiserver-embed-certs-512869
	266da1a1d2146       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   a8526582712ad       etcd-embed-certs-512869
	
	
	==> coredns [d1e915af924f96433a47433525a85568e8338ecb63f016ab4a7294862eba0c2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f1511852605ab7cb89dfa103aeb494f0ed3c42c748ba88684bada5318f8770e0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-512869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-512869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=embed-certs-512869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_41_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:41:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-512869
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:50:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:47:04 +0000   Tue, 16 Apr 2024 17:41:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:47:04 +0000   Tue, 16 Apr 2024 17:41:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:47:04 +0000   Tue, 16 Apr 2024 17:41:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:47:04 +0000   Tue, 16 Apr 2024 17:41:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.141
	  Hostname:    embed-certs-512869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b4244c76c9b420393249cd324acac50
	  System UUID:                0b4244c7-6c9b-4203-9324-9cd324acac50
	  Boot ID:                    18a76deb-aaf0-4212-b1c0-17d786568f1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-mbxnj                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-76f75df574-slfsc                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-embed-certs-512869                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-512869              250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-embed-certs-512869     200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-vxdwg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-embed-certs-512869              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-57f55c9bc5-bgdrb                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node embed-certs-512869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node embed-certs-512869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node embed-certs-512869 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node embed-certs-512869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node embed-certs-512869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node embed-certs-512869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m6s                   node-controller  Node embed-certs-512869 event: Registered Node embed-certs-512869 in Controller
	
	
	==> dmesg <==
	[  +0.052379] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043590] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.593160] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.399498] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.694225] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.716579] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.058826] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064987] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.226129] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.141948] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +0.328517] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[  +5.119818] systemd-fstab-generator[819]: Ignoring "noauto" option for root device
	[  +0.060448] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.056514] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +5.618353] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.961939] kauditd_printk_skb: 84 callbacks suppressed
	[Apr16 17:41] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.914775] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +4.733151] kauditd_printk_skb: 57 callbacks suppressed
	[  +3.090622] systemd-fstab-generator[3984]: Ignoring "noauto" option for root device
	[ +12.458663] systemd-fstab-generator[4174]: Ignoring "noauto" option for root device
	[  +0.139584] kauditd_printk_skb: 14 callbacks suppressed
	[Apr16 17:42] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [266da1a1d2146172fc98ad3dd21efbf921afd6a12eaaf576e7076a8899ee51c9] <==
	{"level":"info","ts":"2024-04-16T17:41:30.137956Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.141:2380"}
	{"level":"info","ts":"2024-04-16T17:41:30.151098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2e86f29f2028fb42 switched to configuration voters=(3352633737877191490)"}
	{"level":"info","ts":"2024-04-16T17:41:30.151499Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e0b4ac6ff07e72e1","local-member-id":"2e86f29f2028fb42","added-peer-id":"2e86f29f2028fb42","added-peer-peer-urls":["https://192.168.83.141:2380"]}
	{"level":"info","ts":"2024-04-16T17:41:30.86554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2e86f29f2028fb42 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T17:41:30.865612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2e86f29f2028fb42 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T17:41:30.865651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2e86f29f2028fb42 received MsgPreVoteResp from 2e86f29f2028fb42 at term 1"}
	{"level":"info","ts":"2024-04-16T17:41:30.865664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2e86f29f2028fb42 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:41:30.86567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2e86f29f2028fb42 received MsgVoteResp from 2e86f29f2028fb42 at term 2"}
	{"level":"info","ts":"2024-04-16T17:41:30.865678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2e86f29f2028fb42 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T17:41:30.865693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2e86f29f2028fb42 elected leader 2e86f29f2028fb42 at term 2"}
	{"level":"info","ts":"2024-04-16T17:41:30.867346Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2e86f29f2028fb42","local-member-attributes":"{Name:embed-certs-512869 ClientURLs:[https://192.168.83.141:2379]}","request-path":"/0/members/2e86f29f2028fb42/attributes","cluster-id":"e0b4ac6ff07e72e1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:41:30.867579Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:41:30.867833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:41:30.868388Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:41:30.868514Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T17:41:30.86863Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:41:30.873729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T17:41:30.879101Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.141:2379"}
	{"level":"info","ts":"2024-04-16T17:41:30.880857Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e0b4ac6ff07e72e1","local-member-id":"2e86f29f2028fb42","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:41:30.911458Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:41:30.913342Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:43:46.207071Z","caller":"traceutil/trace.go:171","msg":"trace[1947380081] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:622; }","duration":"189.488777ms","start":"2024-04-16T17:43:46.017544Z","end":"2024-04-16T17:43:46.207033Z","steps":["trace[1947380081] 'read index received'  (duration: 189.345738ms)","trace[1947380081] 'applied index is now lower than readState.Index'  (duration: 142.641µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:43:46.207518Z","caller":"traceutil/trace.go:171","msg":"trace[1679701649] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"196.075546ms","start":"2024-04-16T17:43:46.011428Z","end":"2024-04-16T17:43:46.207504Z","steps":["trace[1679701649] 'process raft request'  (duration: 195.5056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:43:46.207646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.028636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:43:46.208014Z","caller":"traceutil/trace.go:171","msg":"trace[1892107578] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:584; }","duration":"190.476738ms","start":"2024-04-16T17:43:46.017521Z","end":"2024-04-16T17:43:46.207997Z","steps":["trace[1892107578] 'agreement among raft nodes before linearized reading'  (duration: 190.032811ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:50:53 up 14 min,  0 users,  load average: 0.04, 0.20, 0.18
	Linux embed-certs-512869 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [487358ff90e129ea02ae984bc0a49498b5070b92f33f33e6292a9ff8894e8097] <==
	I0416 17:44:51.145552       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:46:32.764897       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:46:32.765020       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 17:46:33.765123       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:46:33.765365       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:46:33.765412       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:46:33.765574       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:46:33.765661       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:46:33.766649       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:47:33.765912       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:47:33.766000       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:47:33.766014       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:47:33.767392       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:47:33.767496       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:47:33.767557       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:49:33.766253       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:49:33.766823       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:49:33.766864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:49:33.768610       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:49:33.768694       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:49:33.768705       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d5ec0d0568d7e2360251eeeed4e659d5c69444e0baa3c948ebd3a1a0ada9a8c0] <==
	I0416 17:45:21.635238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="489.35µs"
	E0416 17:45:48.000809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:45:48.479626       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:46:18.007497       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:46:18.489051       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:46:48.013873       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:46:48.497883       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:47:18.022093       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:47:18.508122       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:47:48.027520       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:47:48.516907       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 17:47:52.637539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="358.884µs"
	I0416 17:48:04.637774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="149.646µs"
	E0416 17:48:18.033207       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:48:18.526607       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:48:48.039394       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:48:48.536775       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:49:18.045009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:49:18.547162       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:49:48.050651       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:49:48.555869       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:50:18.057447       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:50:18.564012       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:50:48.063182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:50:48.572149       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c8545f49aed2de0c5e35d3ebd149a48bcf7ea053cddfe13f1dd6195164255766] <==
	I0416 17:41:49.808345       1 server_others.go:72] "Using iptables proxy"
	I0416 17:41:49.831670       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.83.141"]
	I0416 17:41:49.937528       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:41:49.937581       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:41:49.937599       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:41:49.946025       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:41:49.946229       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:41:49.946270       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:41:49.947950       1 config.go:188] "Starting service config controller"
	I0416 17:41:49.947994       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:41:49.948018       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:41:49.948022       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:41:49.948508       1 config.go:315] "Starting node config controller"
	I0416 17:41:49.948539       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:41:50.049531       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:41:50.049579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:41:50.052987       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [743f47d8b49851e6d029d1d5380f371249090d8a90dce3827d960c04685bb773] <==
	W0416 17:41:33.745563       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:41:33.745636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:41:33.756762       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:41:33.756827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:41:33.809817       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:41:33.810071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:41:33.930731       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 17:41:33.930785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 17:41:33.945451       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:41:33.945519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:41:33.962450       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:41:33.962562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:41:34.044183       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:41:34.044341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:41:34.062623       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 17:41:34.062726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 17:41:34.078423       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:41:34.078551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:41:34.086573       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:41:34.086632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:41:34.097948       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:41:34.098008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:41:34.339538       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:41:34.340206       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 17:41:37.387346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:48:36 embed-certs-512869 kubelet[3991]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:48:36 embed-certs-512869 kubelet[3991]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:48:36 embed-certs-512869 kubelet[3991]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:48:36 embed-certs-512869 kubelet[3991]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:48:39 embed-certs-512869 kubelet[3991]: E0416 17:48:39.615998    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:48:52 embed-certs-512869 kubelet[3991]: E0416 17:48:52.617074    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:49:06 embed-certs-512869 kubelet[3991]: E0416 17:49:06.617401    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:49:17 embed-certs-512869 kubelet[3991]: E0416 17:49:17.616643    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:49:29 embed-certs-512869 kubelet[3991]: E0416 17:49:29.616667    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:49:36 embed-certs-512869 kubelet[3991]: E0416 17:49:36.670076    3991 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:49:36 embed-certs-512869 kubelet[3991]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:49:36 embed-certs-512869 kubelet[3991]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:49:36 embed-certs-512869 kubelet[3991]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:49:36 embed-certs-512869 kubelet[3991]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:49:41 embed-certs-512869 kubelet[3991]: E0416 17:49:41.616589    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:49:53 embed-certs-512869 kubelet[3991]: E0416 17:49:53.616627    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:50:05 embed-certs-512869 kubelet[3991]: E0416 17:50:05.616560    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:50:20 embed-certs-512869 kubelet[3991]: E0416 17:50:20.617993    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:50:32 embed-certs-512869 kubelet[3991]: E0416 17:50:32.617044    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:50:36 embed-certs-512869 kubelet[3991]: E0416 17:50:36.668075    3991 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:50:36 embed-certs-512869 kubelet[3991]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:50:36 embed-certs-512869 kubelet[3991]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:50:36 embed-certs-512869 kubelet[3991]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:50:36 embed-certs-512869 kubelet[3991]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:50:43 embed-certs-512869 kubelet[3991]: E0416 17:50:43.616492    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	
	
	==> storage-provisioner [ac3befbcd4ab5383cd75068e9221bffe0cd5751ef72ce03cee4e3e5a8bf9bfa7] <==
	I0416 17:41:51.313887       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 17:41:51.330993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 17:41:51.331099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 17:41:51.341374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 17:41:51.341843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-512869_1ef5bb23-9a50-4811-83a2-dc154541d23f!
	I0416 17:41:51.344234       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9aaf55c9-f9f7-4b96-af6c-5ba966ba2d38", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-512869_1ef5bb23-9a50-4811-83a2-dc154541d23f became leader
	I0416 17:41:51.442730       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-512869_1ef5bb23-9a50-4811-83a2-dc154541d23f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512869 -n embed-certs-512869
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-512869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-bgdrb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-512869 describe pod metrics-server-57f55c9bc5-bgdrb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-512869 describe pod metrics-server-57f55c9bc5-bgdrb: exit status 1 (59.210958ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-bgdrb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-512869 describe pod metrics-server-57f55c9bc5-bgdrb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368813 -n no-preload-368813
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-16 17:50:58.849036895 +0000 UTC m=+5494.247713410
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-368813 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-368813 logs -n 25: (1.308754118s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:30 UTC | 16 Apr 24 17:31 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	| delete  | -p                                                     | disable-driver-mounts-376814 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	|         | disable-driver-mounts-376814                           |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-368813                  | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-512869                 | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:35 UTC |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:37 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC | 16 Apr 24 17:38 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:38 UTC | 16 Apr 24 17:38 UTC |
	| start   | -p stopped-upgrade-446675                              | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:38 UTC | 16 Apr 24 17:39 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	| stop    | stopped-upgrade-446675 stop                            | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:39 UTC | 16 Apr 24 17:39 UTC |
	| start   | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:39 UTC | 16 Apr 24 17:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:40 UTC |
	| start   | -p pause-970622 --memory=2048                          | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:42 UTC |
	|         | --install-addons=false                                 |                              |         |                |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:42 UTC | 16 Apr 24 17:43 UTC |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:43 UTC | 16 Apr 24 17:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:43 UTC | 16 Apr 24 17:44 UTC |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-304316  | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:44 UTC | 16 Apr 24 17:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:44 UTC |                     |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-304316       | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:46 UTC |                     |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:46:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:46:56.791301   59445 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:46:56.791849   59445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:46:56.791869   59445 out.go:304] Setting ErrFile to fd 2...
	I0416 17:46:56.791877   59445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:46:56.792352   59445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:46:56.793181   59445 out.go:298] Setting JSON to false
	I0416 17:46:56.794302   59445 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5369,"bootTime":1713284248,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:46:56.794364   59445 start.go:139] virtualization: kvm guest
	I0416 17:46:56.796934   59445 out.go:177] * [default-k8s-diff-port-304316] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:46:56.798418   59445 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:46:56.798451   59445 notify.go:220] Checking for updates...
	I0416 17:46:56.799763   59445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:46:56.801294   59445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:46:56.802621   59445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:46:56.803945   59445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:46:56.805309   59445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:46:56.807263   59445 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:46:56.807849   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.807910   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.822814   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40915
	I0416 17:46:56.823221   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.823677   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.823699   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.823980   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.824113   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.824309   59445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:46:56.824572   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.824603   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.839091   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0416 17:46:56.839441   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.839889   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.839915   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.840218   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.840429   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.875588   59445 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:46:56.876934   59445 start.go:297] selected driver: kvm2
	I0416 17:46:56.876949   59445 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:46:56.877057   59445 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:46:56.877720   59445 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:46:56.877855   59445 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:46:56.891935   59445 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:46:56.892284   59445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:46:56.892355   59445 cni.go:84] Creating CNI manager for ""
	I0416 17:46:56.892367   59445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:46:56.892408   59445 start.go:340] cluster config:
	{Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:46:56.892493   59445 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:46:56.894869   59445 out.go:177] * Starting "default-k8s-diff-port-304316" primary control-plane node in "default-k8s-diff-port-304316" cluster
	I0416 17:46:56.896238   59445 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:46:56.896274   59445 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:46:56.896292   59445 cache.go:56] Caching tarball of preloaded images
	I0416 17:46:56.896377   59445 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:46:56.896392   59445 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:46:56.896522   59445 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/config.json ...
	I0416 17:46:56.896735   59445 start.go:360] acquireMachinesLock for default-k8s-diff-port-304316: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:46:56.896788   59445 start.go:364] duration metric: took 28.964µs to acquireMachinesLock for "default-k8s-diff-port-304316"
	I0416 17:46:56.896810   59445 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:46:56.896824   59445 fix.go:54] fixHost starting: 
	I0416 17:46:56.897218   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.897257   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.910980   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0416 17:46:56.911374   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.911838   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.911861   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.912201   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.912387   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.912575   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:46:56.914179   59445 fix.go:112] recreateIfNeeded on default-k8s-diff-port-304316: state=Running err=<nil>
	W0416 17:46:56.914196   59445 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:46:56.916138   59445 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-304316" VM ...
	I0416 17:46:56.917401   59445 machine.go:94] provisionDockerMachine start ...
	I0416 17:46:56.917423   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.917604   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:46:56.919801   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:46:56.920180   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:43:26 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:46:56.920217   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:46:56.920347   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:46:56.920540   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:46:56.920688   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:46:56.920819   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:46:56.920959   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:46:56.921119   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:46:56.921129   59445 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:46:59.809186   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:02.881077   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:08.961238   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:12.033053   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:18.113089   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:21.185113   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:30.305165   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:33.377208   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:39.457128   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:42.529153   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:48.609097   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:51.685040   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:57.761077   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:00.833230   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:06.913045   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:09.985120   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:16.065075   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:19.141101   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:25.221118   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:28.289135   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:34.369068   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:37.445091   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:43.521090   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:46.593167   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:52.673093   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:55.745116   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:01.825195   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:04.897276   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:10.977087   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:14.049089   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:20.129139   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:23.201163   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:29.281110   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:32.353103   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:38.433052   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:41.505072   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:47.585081   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:50.657107   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:56.737202   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:59.809144   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:05.889152   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:08.965116   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:15.041030   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:18.117063   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:24.193083   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:27.265045   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:33.345075   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:36.417221   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:42.497055   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:45.573055   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:51.649098   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:54.725050   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	
	
	==> CRI-O <==
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.535152232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289859535129026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=604c89fb-899e-4c09-9f1d-3cffa11f943f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.535852029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98d1ea71-efe2-4796-b03c-ded1a7b56dcc name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.535906047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98d1ea71-efe2-4796-b03c-ded1a7b56dcc name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.536094313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289083404032996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0697f82a83a7195d9b0dc02622a594abe278b3c71b38b1df4669cc60b4fd2186,PodSandboxId:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1713289061150783184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{io.kubernetes.container.hash: 11e8af95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183,PodSandboxId:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289060267585293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,},Annotations:map[string]string{io.kubernetes.container.hash: 9f89dad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350,PodSandboxId:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713289052674306325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c4
1-8dac9f537332,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea233a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713289052546999273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc
5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68,PodSandboxId:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289047994271559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6,PodSandboxId:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289047918008933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,},Annotations:map[string]string{io.kubernetes.containe
r.hash: c15aaa2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e,PodSandboxId:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713289047872761138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,},Annotations:map[string]string{io.kuber
netes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7,PodSandboxId:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289047748692641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 1016c0f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98d1ea71-efe2-4796-b03c-ded1a7b56dcc name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.584553909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e81c97bd-aaab-42b6-be4c-34bf901dfdee name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.584625199Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e81c97bd-aaab-42b6-be4c-34bf901dfdee name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.587102297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c7ee449-8079-4b7f-9430-cc0efda04884 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.587568329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289859587536025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c7ee449-8079-4b7f-9430-cc0efda04884 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.588218100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23cfdb42-fcc4-43af-9283-f37057bd0da4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.588274347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23cfdb42-fcc4-43af-9283-f37057bd0da4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.588914840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289083404032996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0697f82a83a7195d9b0dc02622a594abe278b3c71b38b1df4669cc60b4fd2186,PodSandboxId:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1713289061150783184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{io.kubernetes.container.hash: 11e8af95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183,PodSandboxId:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289060267585293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,},Annotations:map[string]string{io.kubernetes.container.hash: 9f89dad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350,PodSandboxId:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713289052674306325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c4
1-8dac9f537332,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea233a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713289052546999273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc
5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68,PodSandboxId:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289047994271559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6,PodSandboxId:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289047918008933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,},Annotations:map[string]string{io.kubernetes.containe
r.hash: c15aaa2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e,PodSandboxId:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713289047872761138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,},Annotations:map[string]string{io.kuber
netes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7,PodSandboxId:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289047748692641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 1016c0f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23cfdb42-fcc4-43af-9283-f37057bd0da4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.634624142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=912459dc-0268-44c2-9a76-db6ccfd98663 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.634693295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=912459dc-0268-44c2-9a76-db6ccfd98663 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.636062262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8db6f97-964b-44b9-bf02-572e310c289d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.636393684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289859636374533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8db6f97-964b-44b9-bf02-572e310c289d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.637099377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b1b91e1-d14c-46a9-a215-fe8749498c11 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.637181262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b1b91e1-d14c-46a9-a215-fe8749498c11 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.637386573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289083404032996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0697f82a83a7195d9b0dc02622a594abe278b3c71b38b1df4669cc60b4fd2186,PodSandboxId:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1713289061150783184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{io.kubernetes.container.hash: 11e8af95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183,PodSandboxId:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289060267585293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,},Annotations:map[string]string{io.kubernetes.container.hash: 9f89dad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350,PodSandboxId:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713289052674306325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c4
1-8dac9f537332,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea233a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713289052546999273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc
5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68,PodSandboxId:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289047994271559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6,PodSandboxId:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289047918008933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,},Annotations:map[string]string{io.kubernetes.containe
r.hash: c15aaa2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e,PodSandboxId:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713289047872761138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,},Annotations:map[string]string{io.kuber
netes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7,PodSandboxId:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289047748692641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 1016c0f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b1b91e1-d14c-46a9-a215-fe8749498c11 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.679418819Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59e47417-32a4-488d-bb0a-de905cc54afc name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.679597821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59e47417-32a4-488d-bb0a-de905cc54afc name=/runtime.v1.RuntimeService/Version
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.681016374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4945c803-e2a5-416a-8141-ab13afc95b4d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.681362740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289859681340791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4945c803-e2a5-416a-8141-ab13afc95b4d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.683356250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d729d36-9318-4462-b013-03115bfc06c1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.683591846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d729d36-9318-4462-b013-03115bfc06c1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:50:59 no-preload-368813 crio[717]: time="2024-04-16 17:50:59.683858217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289083404032996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0697f82a83a7195d9b0dc02622a594abe278b3c71b38b1df4669cc60b4fd2186,PodSandboxId:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1713289061150783184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{io.kubernetes.container.hash: 11e8af95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183,PodSandboxId:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289060267585293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,},Annotations:map[string]string{io.kubernetes.container.hash: 9f89dad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350,PodSandboxId:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713289052674306325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c4
1-8dac9f537332,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea233a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713289052546999273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc
5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68,PodSandboxId:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289047994271559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6,PodSandboxId:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289047918008933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,},Annotations:map[string]string{io.kubernetes.containe
r.hash: c15aaa2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e,PodSandboxId:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713289047872761138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,},Annotations:map[string]string{io.kuber
netes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7,PodSandboxId:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289047748692641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 1016c0f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d729d36-9318-4462-b013-03115bfc06c1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	97a767c06231c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   4fbf28d41144a       storage-provisioner
	0697f82a83a71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   953136307c659       busybox
	00b1b1a135014       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   1b66ad477e63b       coredns-7db6d8ff4d-69lpx
	b20cd13eb5547       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      13 minutes ago      Running             kube-proxy                1                   86f1bed24d28a       kube-proxy-jtn9f
	4f65b3614ace8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   4fbf28d41144a       storage-provisioner
	11bd4705b165f       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      13 minutes ago      Running             kube-scheduler            1                   c5113c1c80b51       kube-scheduler-no-preload-368813
	f9d5271a91234       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   5ea69adaa5bd2       etcd-no-preload-368813
	936600d85bc99       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      13 minutes ago      Running             kube-controller-manager   1                   da5a81607a133       kube-controller-manager-no-preload-368813
	5157fe646abc0       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      13 minutes ago      Running             kube-apiserver            1                   8e575ab599eae       kube-apiserver-no-preload-368813
	
	
	==> coredns [00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44194 - 49085 "HINFO IN 400906379160287812.9137361871461743001. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008848714s
	
	
	==> describe nodes <==
	Name:               no-preload-368813
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-368813
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=no-preload-368813
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_28_29_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:28:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-368813
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:50:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:48:15 +0000   Tue, 16 Apr 2024 17:28:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:48:15 +0000   Tue, 16 Apr 2024 17:28:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:48:15 +0000   Tue, 16 Apr 2024 17:28:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:48:15 +0000   Tue, 16 Apr 2024 17:37:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.33
	  Hostname:    no-preload-368813
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09543460441246e1b6aaf1f1552fa561
	  System UUID:                09543460-4412-46e1-b6aa-f1f1552fa561
	  Boot ID:                    9f113a53-e370-4d44-935e-83eedd02b0ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-69lpx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-368813                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-368813             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-368813    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-jtn9f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-368813             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-tt8vp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-368813 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-368813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-368813 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-368813 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-368813 event: Registered Node no-preload-368813 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-368813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-368813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-368813 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-368813 event: Registered Node no-preload-368813 in Controller
	
	
	==> dmesg <==
	[Apr16 17:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053473] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.983276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.605853] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr16 17:37] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.985702] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.064617] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084322] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.162468] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.164585] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.343845] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[ +17.193270] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.069977] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.016265] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +4.073693] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.605516] systemd-fstab-generator[1962]: Ignoring "noauto" option for root device
	[  +3.637313] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.821043] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6] <==
	{"level":"info","ts":"2024-04-16T17:40:18.508731Z","caller":"traceutil/trace.go:171","msg":"trace[1805294128] linearizableReadLoop","detail":"{readStateIndex:764; appliedIndex:763; }","duration":"179.947975ms","start":"2024-04-16T17:40:18.328766Z","end":"2024-04-16T17:40:18.508714Z","steps":["trace[1805294128] 'read index received'  (duration: 51.43206ms)","trace[1805294128] 'applied index is now lower than readState.Index'  (duration: 128.514468ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:40:18.508848Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.066866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:40:18.50891Z","caller":"traceutil/trace.go:171","msg":"trace[1737305033] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:698; }","duration":"180.184588ms","start":"2024-04-16T17:40:18.328715Z","end":"2024-04-16T17:40:18.508899Z","steps":["trace[1737305033] 'agreement among raft nodes before linearized reading'  (duration: 180.091845ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:40:18.50903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.686268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-368813\" ","response":"range_response_count:1 size:4699"}
	{"level":"info","ts":"2024-04-16T17:40:18.509121Z","caller":"traceutil/trace.go:171","msg":"trace[613680736] range","detail":"{range_begin:/registry/minions/no-preload-368813; range_end:; response_count:1; response_revision:698; }","duration":"105.789436ms","start":"2024-04-16T17:40:18.403308Z","end":"2024-04-16T17:40:18.509098Z","steps":["trace[613680736] 'agreement among raft nodes before linearized reading'  (duration: 105.521392ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:40:19.922025Z","caller":"traceutil/trace.go:171","msg":"trace[30032471] linearizableReadLoop","detail":"{readStateIndex:765; appliedIndex:764; }","duration":"107.824267ms","start":"2024-04-16T17:40:19.814185Z","end":"2024-04-16T17:40:19.92201Z","steps":["trace[30032471] 'read index received'  (duration: 107.648067ms)","trace[30032471] 'applied index is now lower than readState.Index'  (duration: 175.727µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:40:19.922185Z","caller":"traceutil/trace.go:171","msg":"trace[2045468771] transaction","detail":"{read_only:false; response_revision:699; number_of_response:1; }","duration":"132.424708ms","start":"2024-04-16T17:40:19.789748Z","end":"2024-04-16T17:40:19.922173Z","steps":["trace[2045468771] 'process raft request'  (duration: 132.132005ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:40:19.92223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.031898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-tt8vp\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-16T17:40:19.92235Z","caller":"traceutil/trace.go:171","msg":"trace[704343818] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-tt8vp; range_end:; response_count:1; response_revision:699; }","duration":"108.165305ms","start":"2024-04-16T17:40:19.814177Z","end":"2024-04-16T17:40:19.922342Z","steps":["trace[704343818] 'agreement among raft nodes before linearized reading'  (duration: 107.973089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:41:10.445703Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.878793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:41:10.445867Z","caller":"traceutil/trace.go:171","msg":"trace[1670208070] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:741; }","duration":"122.091332ms","start":"2024-04-16T17:41:10.323746Z","end":"2024-04-16T17:41:10.445837Z","steps":["trace[1670208070] 'range keys from in-memory index tree'  (duration: 121.832847ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:41:10.446078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.45527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-tt8vp\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-16T17:41:10.44615Z","caller":"traceutil/trace.go:171","msg":"trace[386621786] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-tt8vp; range_end:; response_count:1; response_revision:741; }","duration":"134.56331ms","start":"2024-04-16T17:41:10.311575Z","end":"2024-04-16T17:41:10.446138Z","steps":["trace[386621786] 'range keys from in-memory index tree'  (duration: 134.362723ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:43:45.24379Z","caller":"traceutil/trace.go:171","msg":"trace[1131274137] transaction","detail":"{read_only:false; response_revision:868; number_of_response:1; }","duration":"129.350495ms","start":"2024-04-16T17:43:45.114386Z","end":"2024-04-16T17:43:45.243737Z","steps":["trace[1131274137] 'process raft request'  (duration: 128.959428ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:43:45.429638Z","caller":"traceutil/trace.go:171","msg":"trace[1808564091] linearizableReadLoop","detail":"{readStateIndex:977; appliedIndex:976; }","duration":"102.383922ms","start":"2024-04-16T17:43:45.327228Z","end":"2024-04-16T17:43:45.429612Z","steps":["trace[1808564091] 'read index received'  (duration: 40.979184ms)","trace[1808564091] 'applied index is now lower than readState.Index'  (duration: 61.404251ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:43:45.429895Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.550026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:43:45.429984Z","caller":"traceutil/trace.go:171","msg":"trace[2052462729] transaction","detail":"{read_only:false; response_revision:869; number_of_response:1; }","duration":"176.160402ms","start":"2024-04-16T17:43:45.25381Z","end":"2024-04-16T17:43:45.42997Z","steps":["trace[2052462729] 'process raft request'  (duration: 114.484617ms)","trace[2052462729] 'compare'  (duration: 61.202434ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:43:45.429998Z","caller":"traceutil/trace.go:171","msg":"trace[604975077] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:869; }","duration":"102.779314ms","start":"2024-04-16T17:43:45.327202Z","end":"2024-04-16T17:43:45.429981Z","steps":["trace[604975077] 'agreement among raft nodes before linearized reading'  (duration: 102.5469ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:43:45.693498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.090257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:43:45.693866Z","caller":"traceutil/trace.go:171","msg":"trace[1017121556] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:869; }","duration":"142.543963ms","start":"2024-04-16T17:43:45.551286Z","end":"2024-04-16T17:43:45.69383Z","steps":["trace[1017121556] 'range keys from in-memory index tree'  (duration: 142.038365ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:43:47.504002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.09319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:43:47.504095Z","caller":"traceutil/trace.go:171","msg":"trace[463873739] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:870; }","duration":"180.217033ms","start":"2024-04-16T17:43:47.323861Z","end":"2024-04-16T17:43:47.504078Z","steps":["trace[463873739] 'range keys from in-memory index tree'  (duration: 180.041359ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:47:30.196979Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":805}
	{"level":"info","ts":"2024-04-16T17:47:30.208309Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":805,"took":"10.324697ms","hash":3236236763,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-04-16T17:47:30.208401Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3236236763,"revision":805,"compact-revision":-1}
	
	
	==> kernel <==
	 17:51:00 up 14 min,  0 users,  load average: 0.01, 0.07, 0.08
	Linux no-preload-368813 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7] <==
	I0416 17:45:32.533841       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:47:31.537214       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:47:31.537551       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 17:47:32.538303       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:47:32.538383       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:47:32.538390       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:47:32.538676       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:47:32.538820       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:47:32.540260       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:48:32.539504       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:48:32.539719       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:48:32.539747       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:48:32.540717       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:48:32.540747       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:48:32.540758       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:50:32.540576       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:50:32.540787       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:50:32.540801       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:50:32.540997       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:50:32.541158       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:50:32.542669       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e] <==
	I0416 17:45:14.601297       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:45:44.114982       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:45:44.609229       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:46:14.119285       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:46:14.617237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:46:44.128770       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:46:44.625772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:47:14.133965       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:47:14.633642       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:47:44.138902       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:47:44.644918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:48:14.144417       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:48:14.652576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 17:48:36.204142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="292.324µs"
	E0416 17:48:44.149854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:48:44.660334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 17:48:50.198631       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="94.303µs"
	E0416 17:49:14.155191       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:49:14.669923       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:49:44.165889       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:49:44.678602       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:50:14.172030       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:50:14.687408       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:50:44.178153       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:50:44.696364       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350] <==
	I0416 17:37:32.949040       1 server_linux.go:69] "Using iptables proxy"
	I0416 17:37:32.962648       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.33"]
	I0416 17:37:33.074609       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0416 17:37:33.074686       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:37:33.074704       1 server_linux.go:165] "Using iptables Proxier"
	I0416 17:37:33.080581       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:37:33.080982       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0416 17:37:33.081187       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:37:33.082192       1 config.go:192] "Starting service config controller"
	I0416 17:37:33.082316       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0416 17:37:33.082364       1 config.go:101] "Starting endpoint slice config controller"
	I0416 17:37:33.082382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0416 17:37:33.082855       1 config.go:319] "Starting node config controller"
	I0416 17:37:33.083692       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0416 17:37:33.183668       1 shared_informer.go:320] Caches are synced for service config
	I0416 17:37:33.186075       1 shared_informer.go:320] Caches are synced for node config
	I0416 17:37:33.187702       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68] <==
	I0416 17:37:31.496079       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 17:37:31.496326       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:37:31.498605       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:37:31.496423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0416 17:37:31.513684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:37:31.513740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:37:31.513832       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 17:37:31.513873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 17:37:31.513928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 17:37:31.513937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 17:37:31.513967       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:37:31.513974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:37:31.524819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:37:31.524873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:37:31.528873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:37:31.528929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:37:31.528992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:37:31.529031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:37:31.529183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0416 17:37:31.529195       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0416 17:37:31.529223       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:37:31.529260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:37:31.529337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 17:37:31.529346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0416 17:37:31.599339       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:48:27 no-preload-368813 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:48:27 no-preload-368813 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:48:27 no-preload-368813 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:48:27 no-preload-368813 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:48:36 no-preload-368813 kubelet[1359]: E0416 17:48:36.184867    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:48:50 no-preload-368813 kubelet[1359]: E0416 17:48:50.184418    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:49:01 no-preload-368813 kubelet[1359]: E0416 17:49:01.185710    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:49:14 no-preload-368813 kubelet[1359]: E0416 17:49:14.184356    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:49:27 no-preload-368813 kubelet[1359]: E0416 17:49:27.211734    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 16 17:49:27 no-preload-368813 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:49:27 no-preload-368813 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:49:27 no-preload-368813 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:49:27 no-preload-368813 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:49:29 no-preload-368813 kubelet[1359]: E0416 17:49:29.185358    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:49:44 no-preload-368813 kubelet[1359]: E0416 17:49:44.185015    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:49:59 no-preload-368813 kubelet[1359]: E0416 17:49:59.184489    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:50:13 no-preload-368813 kubelet[1359]: E0416 17:50:13.185052    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:50:26 no-preload-368813 kubelet[1359]: E0416 17:50:26.184417    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:50:27 no-preload-368813 kubelet[1359]: E0416 17:50:27.212475    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 16 17:50:27 no-preload-368813 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:50:27 no-preload-368813 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:50:27 no-preload-368813 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:50:27 no-preload-368813 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:50:41 no-preload-368813 kubelet[1359]: E0416 17:50:41.184874    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:50:56 no-preload-368813 kubelet[1359]: E0416 17:50:56.184795    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	
	
	==> storage-provisioner [4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be] <==
	I0416 17:37:32.740375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0416 17:38:02.745240       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00] <==
	I0416 17:38:03.505820       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 17:38:03.516196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 17:38:03.516317       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 17:38:20.920325       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 17:38:20.921030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4520299d-bd38-406b-a78e-d4bd85587366", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-368813_5eb124a9-7fef-465b-b148-bd6050ca785a became leader
	I0416 17:38:20.921297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-368813_5eb124a9-7fef-465b-b148-bd6050ca785a!
	I0416 17:38:21.022104       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-368813_5eb124a9-7fef-465b-b148-bd6050ca785a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368813 -n no-preload-368813
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-368813 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-tt8vp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-368813 describe pod metrics-server-569cc877fc-tt8vp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-368813 describe pod metrics-server-569cc877fc-tt8vp: exit status 1 (61.131269ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-tt8vp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-368813 describe pod metrics-server-569cc877fc-tt8vp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.26s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-970622 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-970622 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.90966323s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-970622] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-970622" primary control-plane node in "pause-970622" cluster
	* Updating the running kvm2 "pause-970622" VM ...
	* Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-970622" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 17:42:16.064687   57766 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:42:16.064805   57766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:42:16.064816   57766 out.go:304] Setting ErrFile to fd 2...
	I0416 17:42:16.064823   57766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:42:16.065109   57766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:42:16.065733   57766 out.go:298] Setting JSON to false
	I0416 17:42:16.066729   57766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5088,"bootTime":1713284248,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:42:16.066810   57766 start.go:139] virtualization: kvm guest
	I0416 17:42:16.069699   57766 out.go:177] * [pause-970622] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:42:16.070884   57766 notify.go:220] Checking for updates...
	I0416 17:42:16.070891   57766 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:42:16.072082   57766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:42:16.073252   57766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:42:16.074328   57766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:42:16.075482   57766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:42:16.076590   57766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:42:16.078195   57766 config.go:182] Loaded profile config "pause-970622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:42:16.078688   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.078748   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.094555   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I0416 17:42:16.095031   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.095569   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.095589   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.095917   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.096090   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.096345   57766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:42:16.096603   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.096634   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.110721   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0416 17:42:16.111174   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.111596   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.111615   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.111978   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.112175   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.147098   57766 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:42:16.148452   57766 start.go:297] selected driver: kvm2
	I0416 17:42:16.148467   57766 start.go:901] validating driver "kvm2" against &{Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:16.148605   57766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:42:16.149020   57766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:42:16.149113   57766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:42:16.163353   57766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:42:16.164252   57766 cni.go:84] Creating CNI manager for ""
	I0416 17:42:16.164275   57766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:42:16.164353   57766 start.go:340] cluster config:
	{Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:16.164557   57766 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:42:16.166182   57766 out.go:177] * Starting "pause-970622" primary control-plane node in "pause-970622" cluster
	I0416 17:42:16.167463   57766 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:42:16.167506   57766 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:42:16.167515   57766 cache.go:56] Caching tarball of preloaded images
	I0416 17:42:16.167594   57766 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:42:16.167608   57766 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:42:16.167713   57766 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/config.json ...
	I0416 17:42:16.167890   57766 start.go:360] acquireMachinesLock for pause-970622: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:42:16.167928   57766 start.go:364] duration metric: took 21.543µs to acquireMachinesLock for "pause-970622"
	I0416 17:42:16.167938   57766 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:42:16.167947   57766 fix.go:54] fixHost starting: 
	I0416 17:42:16.168253   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.168294   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.182553   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0416 17:42:16.182912   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.183330   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.183348   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.183646   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.183853   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.183970   57766 main.go:141] libmachine: (pause-970622) Calling .GetState
	I0416 17:42:16.185520   57766 fix.go:112] recreateIfNeeded on pause-970622: state=Running err=<nil>
	W0416 17:42:16.185539   57766 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:42:16.187252   57766 out.go:177] * Updating the running kvm2 "pause-970622" VM ...
	I0416 17:42:16.188463   57766 machine.go:94] provisionDockerMachine start ...
	I0416 17:42:16.188508   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.188695   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.191471   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.191856   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.191882   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.192009   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.192188   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.192328   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.192477   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.192584   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.192761   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.192771   57766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:42:16.298084   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-970622
	
	I0416 17:42:16.298126   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.298482   57766 buildroot.go:166] provisioning hostname "pause-970622"
	I0416 17:42:16.298513   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.298725   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.301325   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.301725   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.301759   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.301930   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.302132   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.302305   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.302493   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.302695   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.302894   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.302910   57766 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-970622 && echo "pause-970622" | sudo tee /etc/hostname
	I0416 17:42:16.426028   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-970622
	
	I0416 17:42:16.426063   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.429199   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.429600   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.429639   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.429832   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.430056   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.430212   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.430379   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.430592   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.430809   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.430828   57766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-970622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-970622/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-970622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:42:16.538293   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:42:16.538325   57766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:42:16.538374   57766 buildroot.go:174] setting up certificates
	I0416 17:42:16.538384   57766 provision.go:84] configureAuth start
	I0416 17:42:16.538398   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.538717   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:16.541494   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.541839   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.541880   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.542039   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.544408   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.544730   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.544756   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.544962   57766 provision.go:143] copyHostCerts
	I0416 17:42:16.545018   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:42:16.545041   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:42:16.545125   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:42:16.545263   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:42:16.545276   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:42:16.545326   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:42:16.545413   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:42:16.545423   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:42:16.545457   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:42:16.545537   57766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.pause-970622 san=[127.0.0.1 192.168.39.176 localhost minikube pause-970622]
	I0416 17:42:16.585110   57766 provision.go:177] copyRemoteCerts
	I0416 17:42:16.585181   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:42:16.585204   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.588049   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.588468   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.588501   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.588700   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.588901   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.589127   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.589304   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:16.674614   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:42:16.707537   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0416 17:42:16.740926   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:42:16.772124   57766 provision.go:87] duration metric: took 233.730002ms to configureAuth
	I0416 17:42:16.772154   57766 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:42:16.772406   57766 config.go:182] Loaded profile config "pause-970622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:42:16.772510   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.775240   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.775552   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.775601   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.775789   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.775957   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.776163   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.776308   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.776468   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.776631   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.776653   57766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:42:22.391526   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:42:22.391554   57766 machine.go:97] duration metric: took 6.203075471s to provisionDockerMachine
	I0416 17:42:22.391568   57766 start.go:293] postStartSetup for "pause-970622" (driver="kvm2")
	I0416 17:42:22.391581   57766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:42:22.391610   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.391947   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:42:22.391971   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.394425   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.394768   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.394798   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.394917   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.395088   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.395244   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.395368   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.477278   57766 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:42:22.482108   57766 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:42:22.482129   57766 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:42:22.482208   57766 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:42:22.482311   57766 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:42:22.482435   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:42:22.493526   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:42:22.520957   57766 start.go:296] duration metric: took 129.359894ms for postStartSetup
	I0416 17:42:22.520998   57766 fix.go:56] duration metric: took 6.353054008s for fixHost
	I0416 17:42:22.521022   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.523922   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.524251   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.524280   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.524423   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.524641   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.524917   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.525056   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.525243   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:22.525511   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:22.525532   57766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 17:42:22.629958   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713289342.618991630
	
	I0416 17:42:22.629990   57766 fix.go:216] guest clock: 1713289342.618991630
	I0416 17:42:22.630000   57766 fix.go:229] Guest: 2024-04-16 17:42:22.61899163 +0000 UTC Remote: 2024-04-16 17:42:22.521003217 +0000 UTC m=+6.507572872 (delta=97.988413ms)
	I0416 17:42:22.630056   57766 fix.go:200] guest clock delta is within tolerance: 97.988413ms
	I0416 17:42:22.630064   57766 start.go:83] releasing machines lock for "pause-970622", held for 6.462129483s
	I0416 17:42:22.630096   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.630360   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:22.633198   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.633601   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.633630   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.633830   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634328   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634497   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634572   57766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:42:22.634613   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.634698   57766 ssh_runner.go:195] Run: cat /version.json
	I0416 17:42:22.634723   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.637123   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637450   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637481   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.637500   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637659   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.637841   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.637895   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.637948   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.638020   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.638088   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.638225   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.638243   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.638398   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.638566   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.745845   57766 ssh_runner.go:195] Run: systemctl --version
	I0416 17:42:22.753300   57766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:42:22.911391   57766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:42:22.918725   57766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:42:22.918783   57766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:42:22.928742   57766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 17:42:22.928762   57766 start.go:494] detecting cgroup driver to use...
	I0416 17:42:22.928829   57766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:42:22.946142   57766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:42:22.962395   57766 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:42:22.962444   57766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:42:22.977464   57766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:42:22.991813   57766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:42:23.124475   57766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:42:23.263793   57766 docker.go:233] disabling docker service ...
	I0416 17:42:23.263882   57766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:42:23.285179   57766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:42:23.301794   57766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:42:23.437427   57766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:42:23.564625   57766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:42:23.579837   57766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:42:23.603432   57766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:42:23.603503   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.615464   57766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:42:23.615531   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.627243   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.638519   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.649491   57766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:42:23.660860   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.672379   57766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.684949   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.696353   57766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:42:23.706612   57766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:42:23.717299   57766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:42:23.863740   57766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:42:29.929060   57766 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.065271729s)
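	The block ending here rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup), enables IP forwarding, reloads systemd units and restarts CRI-O, which accounts for the ~6s pause. Below is a minimal sketch of the same edits, run locally with os/exec instead of minikube's ssh_runner and with error handling trimmed; the sed expressions and file paths are copied from the logged commands, everything else is assumed.

```go
// Sketch: apply the CRI-O drop-in edits shown in the log above, locally via sudo.
// The sed expressions and the 02-crio.conf path are taken from the logged commands.
package main

import (
	"fmt"
	"os/exec"
)

func run(script string) error {
	out, err := exec.Command("sudo", "sh", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v\n%s", script, err, out)
	}
	return nil
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"echo 1 > /proc/sys/net/ipv4/ip_forward",
		"systemctl daemon-reload",
		"systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}
```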
	I0416 17:42:29.929092   57766 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:42:29.929157   57766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:42:29.935007   57766 start.go:562] Will wait 60s for crictl version
	I0416 17:42:29.935058   57766 ssh_runner.go:195] Run: which crictl
	I0416 17:42:29.939727   57766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:42:29.990234   57766 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
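	The multi-line message above is the output of `sudo /usr/bin/crictl version`, which minikube waits up to 60s for. A small, hypothetical parser for that key/value format (the field names are taken from the output above; the parsing approach itself is assumed, not minikube's):

```go
// Hypothetical parser for the `crictl version` key/value output shown above.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		parts := strings.SplitN(sc.Text(), ":", 2)
		if len(parts) == 2 {
			fields[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
	}
	fmt.Printf("runtime %s %s (API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}
```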
	I0416 17:42:29.990344   57766 ssh_runner.go:195] Run: crio --version
	I0416 17:42:30.031923   57766 ssh_runner.go:195] Run: crio --version
	I0416 17:42:30.073505   57766 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 17:42:30.074763   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:30.077893   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:30.078312   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:30.078335   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:30.078591   57766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 17:42:30.083804   57766 kubeadm.go:877] updating cluster {Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:42:30.083933   57766 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:42:30.083973   57766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:42:30.139181   57766 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:42:30.139202   57766 crio.go:433] Images already preloaded, skipping extraction
	I0416 17:42:30.139251   57766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:42:30.176379   57766 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:42:30.176402   57766 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:42:30.176410   57766 kubeadm.go:928] updating node { 192.168.39.176 8443 v1.29.3 crio true true} ...
	I0416 17:42:30.176507   57766 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-970622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:42:30.176586   57766 ssh_runner.go:195] Run: crio config
	I0416 17:42:30.235761   57766 cni.go:84] Creating CNI manager for ""
	I0416 17:42:30.235787   57766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:42:30.235805   57766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:42:30.235838   57766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-970622 NodeName:pause-970622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:42:30.235999   57766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-970622"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
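	The generated KubeletConfiguration above declares `cgroupDriver: cgroupfs` and points `containerRuntimeEndpoint` at the CRI-O socket, matching the `cgroup_manager = "cgroupfs"` written into 02-crio.conf earlier; if the two disagreed, the kubelet would fail to start against CRI-O. A small consistency check, sketched with gopkg.in/yaml.v3 as an assumed parser (this is an illustration, not part of minikube):

```go
// Sketch: verify that the kubelet's cgroup driver matches the value written
// into the CRI-O drop-in earlier in this log. gopkg.in/yaml.v3 is assumed as
// the YAML parser; the embedded snippet is copied from the config dump above.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const kubeletCfg = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

func main() {
	var cfg struct {
		CgroupDriver string `yaml:"cgroupDriver"`
	}
	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
		fmt.Println("parse:", err)
		return
	}
	const crioCgroupManager = "cgroupfs" // value sed'ed into 02-crio.conf above
	if cfg.CgroupDriver != crioCgroupManager {
		fmt.Printf("mismatch: kubelet uses %q, CRI-O uses %q\n", cfg.CgroupDriver, crioCgroupManager)
		return
	}
	fmt.Println("cgroup drivers agree:", cfg.CgroupDriver)
}
```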
	I0416 17:42:30.236077   57766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:42:30.248729   57766 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:42:30.248805   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:42:30.259932   57766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0416 17:42:30.279237   57766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:42:30.297756   57766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0416 17:42:30.316328   57766 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I0416 17:42:30.320886   57766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:42:30.458429   57766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:42:30.476656   57766 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622 for IP: 192.168.39.176
	I0416 17:42:30.476687   57766 certs.go:194] generating shared ca certs ...
	I0416 17:42:30.476704   57766 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:42:30.476873   57766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:42:30.476922   57766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:42:30.476936   57766 certs.go:256] generating profile certs ...
	I0416 17:42:30.477038   57766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/client.key
	I0416 17:42:30.477122   57766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.key.017177e3
	I0416 17:42:30.477208   57766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.key
	I0416 17:42:30.477345   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:42:30.477383   57766 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:42:30.477397   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:42:30.477437   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:42:30.477469   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:42:30.477511   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:42:30.477570   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:42:30.478318   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:42:30.506798   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:42:30.535507   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:42:30.565525   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:42:30.595303   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0416 17:42:30.625867   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 17:42:30.658552   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:42:30.687664   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:42:30.714776   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:42:30.800977   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:42:30.893218   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:42:31.107913   57766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:42:31.143884   57766 ssh_runner.go:195] Run: openssl version
	I0416 17:42:31.188299   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:42:31.361460   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.395338   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.395397   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.456853   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:42:31.558276   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:42:31.618067   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.629106   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.629170   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.641644   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:42:31.683687   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:42:31.715784   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.726103   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.726158   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.805323   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:42:31.837420   57766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:42:31.849378   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:42:31.864850   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:42:31.889616   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:42:31.932682   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:42:31.948249   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:42:31.963764   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
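	Each `openssl x509 -noout -checkend 86400` call above only asks whether the certificate will still be valid 24 hours from now (exit status 0 if so). Roughly the same check in Go, shown for one of the logged certificate paths and assuming the file is readable (an illustration, not minikube's implementation):

```go
// Illustration of what `openssl x509 -noout -checkend 86400` verifies:
// the certificate must still be valid 86400 seconds (24h) from now.
// The path is one of the files checked in the log; reading it needs root.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM data found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	cutoff := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(cutoff) {
		fmt.Println("certificate expires within 24h, at", cert.NotAfter.Format(time.RFC3339))
	} else {
		fmt.Println("certificate still valid at", cutoff.Format(time.RFC3339))
	}
}
```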
	I0416 17:42:31.975150   57766 kubeadm.go:391] StartCluster: {Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:31.975263   57766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:42:31.975307   57766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:42:32.085675   57766 cri.go:89] found id: "bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b"
	I0416 17:42:32.085703   57766 cri.go:89] found id: "7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1"
	I0416 17:42:32.085714   57766 cri.go:89] found id: "f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3"
	I0416 17:42:32.085719   57766 cri.go:89] found id: "bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc"
	I0416 17:42:32.085723   57766 cri.go:89] found id: "7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d"
	I0416 17:42:32.085737   57766 cri.go:89] found id: "9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447"
	I0416 17:42:32.085741   57766 cri.go:89] found id: "42d0967f68b2840f48e454f17063550bc595ee48a3129e743331163fb511fadb"
	I0416 17:42:32.085745   57766 cri.go:89] found id: "a7aca58c6cf26bf99d8c2e3e79dbb19b6626cebe102517cd978ba6cec252a6b0"
	I0416 17:42:32.085752   57766 cri.go:89] found id: "d365abb1c89be710e9b03f2ed845bcbf9cccca66c03853bbbdef1a0381987a52"
	I0416 17:42:32.085760   57766 cri.go:89] found id: "9c28d5fcbca20ac35f97ddd4dc7be237a460e6ae62f71c6bc3ae1dff833832c4"
	I0416 17:42:32.085768   57766 cri.go:89] found id: ""
	I0416 17:42:32.085819   57766 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-970622 -n pause-970622
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-970622 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-970622 logs -n 25: (1.501268141s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-512869            | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC | 16 Apr 24 17:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-795352                              | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC | 16 Apr 24 17:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-795352             | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC | 16 Apr 24 17:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-795352                              | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| start   | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:30 UTC | 16 Apr 24 17:31 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	| delete  | -p                                                     | disable-driver-mounts-376814 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	|         | disable-driver-mounts-376814                           |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-368813                  | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-512869                 | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:35 UTC |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:37 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC | 16 Apr 24 17:38 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:38 UTC | 16 Apr 24 17:38 UTC |
	| start   | -p stopped-upgrade-446675                              | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:38 UTC | 16 Apr 24 17:39 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	| stop    | stopped-upgrade-446675 stop                            | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:39 UTC | 16 Apr 24 17:39 UTC |
	| start   | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:39 UTC | 16 Apr 24 17:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:40 UTC |
	| start   | -p pause-970622 --memory=2048                          | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:42 UTC |
	|         | --install-addons=false                                 |                              |         |                |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:42 UTC | 16 Apr 24 17:43 UTC |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:42:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:42:16.064687   57766 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:42:16.064805   57766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:42:16.064816   57766 out.go:304] Setting ErrFile to fd 2...
	I0416 17:42:16.064823   57766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:42:16.065109   57766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:42:16.065733   57766 out.go:298] Setting JSON to false
	I0416 17:42:16.066729   57766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5088,"bootTime":1713284248,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:42:16.066810   57766 start.go:139] virtualization: kvm guest
	I0416 17:42:16.069699   57766 out.go:177] * [pause-970622] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:42:16.070884   57766 notify.go:220] Checking for updates...
	I0416 17:42:16.070891   57766 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:42:16.072082   57766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:42:16.073252   57766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:42:16.074328   57766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:42:16.075482   57766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:42:16.076590   57766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:42:16.078195   57766 config.go:182] Loaded profile config "pause-970622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:42:16.078688   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.078748   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.094555   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I0416 17:42:16.095031   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.095569   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.095589   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.095917   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.096090   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.096345   57766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:42:16.096603   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.096634   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.110721   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0416 17:42:16.111174   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.111596   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.111615   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.111978   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.112175   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.147098   57766 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:42:16.148452   57766 start.go:297] selected driver: kvm2
	I0416 17:42:16.148467   57766 start.go:901] validating driver "kvm2" against &{Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:16.148605   57766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:42:16.149020   57766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:42:16.149113   57766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:42:16.163353   57766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:42:16.164252   57766 cni.go:84] Creating CNI manager for ""
	I0416 17:42:16.164275   57766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:42:16.164353   57766 start.go:340] cluster config:
	{Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:16.164557   57766 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:42:16.166182   57766 out.go:177] * Starting "pause-970622" primary control-plane node in "pause-970622" cluster
	I0416 17:42:16.167463   57766 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:42:16.167506   57766 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:42:16.167515   57766 cache.go:56] Caching tarball of preloaded images
	I0416 17:42:16.167594   57766 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:42:16.167608   57766 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:42:16.167713   57766 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/config.json ...
	I0416 17:42:16.167890   57766 start.go:360] acquireMachinesLock for pause-970622: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:42:16.167928   57766 start.go:364] duration metric: took 21.543µs to acquireMachinesLock for "pause-970622"
	I0416 17:42:16.167938   57766 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:42:16.167947   57766 fix.go:54] fixHost starting: 
	I0416 17:42:16.168253   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.168294   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.182553   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0416 17:42:16.182912   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.183330   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.183348   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.183646   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.183853   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.183970   57766 main.go:141] libmachine: (pause-970622) Calling .GetState
	I0416 17:42:16.185520   57766 fix.go:112] recreateIfNeeded on pause-970622: state=Running err=<nil>
	W0416 17:42:16.185539   57766 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:42:16.187252   57766 out.go:177] * Updating the running kvm2 "pause-970622" VM ...
	I0416 17:42:16.188463   57766 machine.go:94] provisionDockerMachine start ...
	I0416 17:42:16.188508   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.188695   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.191471   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.191856   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.191882   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.192009   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.192188   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.192328   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.192477   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.192584   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.192761   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.192771   57766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:42:16.298084   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-970622
	
	I0416 17:42:16.298126   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.298482   57766 buildroot.go:166] provisioning hostname "pause-970622"
	I0416 17:42:16.298513   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.298725   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.301325   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.301725   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.301759   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.301930   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.302132   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.302305   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.302493   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.302695   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.302894   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.302910   57766 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-970622 && echo "pause-970622" | sudo tee /etc/hostname
	I0416 17:42:16.426028   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-970622
	
	I0416 17:42:16.426063   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.429199   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.429600   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.429639   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.429832   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.430056   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.430212   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.430379   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.430592   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.430809   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.430828   57766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-970622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-970622/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-970622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:42:16.538293   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:42:16.538325   57766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:42:16.538374   57766 buildroot.go:174] setting up certificates
	I0416 17:42:16.538384   57766 provision.go:84] configureAuth start
	I0416 17:42:16.538398   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.538717   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:16.541494   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.541839   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.541880   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.542039   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.544408   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.544730   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.544756   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.544962   57766 provision.go:143] copyHostCerts
	I0416 17:42:16.545018   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:42:16.545041   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:42:16.545125   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:42:16.545263   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:42:16.545276   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:42:16.545326   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:42:16.545413   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:42:16.545423   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:42:16.545457   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:42:16.545537   57766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.pause-970622 san=[127.0.0.1 192.168.39.176 localhost minikube pause-970622]
	I0416 17:42:16.585110   57766 provision.go:177] copyRemoteCerts
	I0416 17:42:16.585181   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:42:16.585204   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.588049   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.588468   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.588501   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.588700   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.588901   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.589127   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.589304   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:16.674614   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:42:16.707537   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0416 17:42:16.740926   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:42:16.772124   57766 provision.go:87] duration metric: took 233.730002ms to configureAuth
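The configureAuth step above regenerates the machine server certificate with the SANs listed in the log (127.0.0.1, 192.168.39.176, localhost, minikube, pause-970622) and copies it to /etc/docker on the VM. A minimal sketch, assuming the pause-970622 profile and the /etc/docker/server.pem path from the log, of how the SANs of the copied certificate could be checked from the host (not part of the captured run):

    # Sketch: print the SANs of the provisioned server cert on the VM.
    minikube -p pause-970622 ssh -- sudo openssl x509 -noout -text -in /etc/docker/server.pem \
      | grep -A1 'Subject Alternative Name'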
	I0416 17:42:16.772154   57766 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:42:16.772406   57766 config.go:182] Loaded profile config "pause-970622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:42:16.772510   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.775240   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.775552   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.775601   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.775789   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.775957   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.776163   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.776308   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.776468   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.776631   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.776653   57766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:42:22.391526   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:42:22.391554   57766 machine.go:97] duration metric: took 6.203075471s to provisionDockerMachine
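Provisioning finishes by writing CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarting crio, which accounts for most of the 6.2s reported above. A minimal sketch, assuming the paths shown in the log, of how that drop-in could be confirmed on the VM (not part of the captured run):

    # Sketch: confirm the sysconfig drop-in exists and crio is active again.
    minikube -p pause-970622 ssh -- sudo cat /etc/sysconfig/crio.minikube
    minikube -p pause-970622 ssh -- systemctl is-active crio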
	I0416 17:42:22.391568   57766 start.go:293] postStartSetup for "pause-970622" (driver="kvm2")
	I0416 17:42:22.391581   57766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:42:22.391610   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.391947   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:42:22.391971   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.394425   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.394768   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.394798   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.394917   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.395088   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.395244   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.395368   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.477278   57766 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:42:22.482108   57766 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:42:22.482129   57766 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:42:22.482208   57766 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:42:22.482311   57766 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:42:22.482435   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:42:22.493526   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:42:22.520957   57766 start.go:296] duration metric: took 129.359894ms for postStartSetup
	I0416 17:42:22.520998   57766 fix.go:56] duration metric: took 6.353054008s for fixHost
	I0416 17:42:22.521022   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.523922   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.524251   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.524280   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.524423   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.524641   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.524917   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.525056   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.525243   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:22.525511   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:22.525532   57766 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:42:22.629958   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713289342.618991630
	
	I0416 17:42:22.629990   57766 fix.go:216] guest clock: 1713289342.618991630
	I0416 17:42:22.630000   57766 fix.go:229] Guest: 2024-04-16 17:42:22.61899163 +0000 UTC Remote: 2024-04-16 17:42:22.521003217 +0000 UTC m=+6.507572872 (delta=97.988413ms)
	I0416 17:42:22.630056   57766 fix.go:200] guest clock delta is within tolerance: 97.988413ms
	I0416 17:42:22.630064   57766 start.go:83] releasing machines lock for "pause-970622", held for 6.462129483s
	I0416 17:42:22.630096   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.630360   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:22.633198   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.633601   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.633630   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.633830   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634328   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634497   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634572   57766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:42:22.634613   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.634698   57766 ssh_runner.go:195] Run: cat /version.json
	I0416 17:42:22.634723   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.637123   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637450   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637481   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.637500   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637659   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.637841   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.637895   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.637948   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.638020   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.638088   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.638225   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.638243   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.638398   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.638566   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.745845   57766 ssh_runner.go:195] Run: systemctl --version
	I0416 17:42:22.753300   57766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:42:22.911391   57766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:42:22.918725   57766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:42:22.918783   57766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:42:22.928742   57766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 17:42:22.928762   57766 start.go:494] detecting cgroup driver to use...
	I0416 17:42:22.928829   57766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:42:22.946142   57766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:42:22.962395   57766 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:42:22.962444   57766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:42:22.977464   57766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:42:22.991813   57766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:42:23.124475   57766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:42:23.263793   57766 docker.go:233] disabling docker service ...
	I0416 17:42:23.263882   57766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:42:23.285179   57766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:42:23.301794   57766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:42:23.437427   57766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:42:23.564625   57766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:42:23.579837   57766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:42:23.603432   57766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:42:23.603503   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.615464   57766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:42:23.615531   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.627243   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.638519   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.649491   57766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:42:23.660860   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.672379   57766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.684949   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.696353   57766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:42:23.706612   57766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:42:23.717299   57766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:42:23.863740   57766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:42:29.929060   57766 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.065271729s)
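The sed commands above (17:42:23.60 through 17:42:23.69) edit /etc/crio/crio.conf.d/02-crio.conf before this restart. A minimal sketch of what that drop-in should end up containing, reconstructed from those commands rather than copied from the VM, plus one way to verify it:

    # Reconstruction based on the sed edits above (verify on the node rather than trusting the sketch):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    minikube -p pause-970622 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf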
	I0416 17:42:29.929092   57766 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:42:29.929157   57766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:42:29.935007   57766 start.go:562] Will wait 60s for crictl version
	I0416 17:42:29.935058   57766 ssh_runner.go:195] Run: which crictl
	I0416 17:42:29.939727   57766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:42:29.990234   57766 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:42:29.990344   57766 ssh_runner.go:195] Run: crio --version
	I0416 17:42:30.031923   57766 ssh_runner.go:195] Run: crio --version
	I0416 17:42:30.073505   57766 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 17:42:30.074763   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:30.077893   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:30.078312   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:30.078335   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:30.078591   57766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 17:42:30.083804   57766 kubeadm.go:877] updating cluster {Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:42:30.083933   57766 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:42:30.083973   57766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:42:30.139181   57766 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:42:30.139202   57766 crio.go:433] Images already preloaded, skipping extraction
	I0416 17:42:30.139251   57766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:42:30.176379   57766 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:42:30.176402   57766 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:42:30.176410   57766 kubeadm.go:928] updating node { 192.168.39.176 8443 v1.29.3 crio true true} ...
	I0416 17:42:30.176507   57766 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-970622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
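The kubelet drop-in above uses the standard systemd pattern of an empty ExecStart= line to clear the packaged command before supplying the minikube-specific one. A minimal sketch, assuming the destination path shown a few lines below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf), of how the rendered unit could be inspected on the VM (not part of the captured run):

    # Sketch: read back the generated kubelet drop-in and the effective unit.
    minikube -p pause-970622 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    minikube -p pause-970622 ssh -- systemctl cat kubelet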
	I0416 17:42:30.176586   57766 ssh_runner.go:195] Run: crio config
	I0416 17:42:30.235761   57766 cni.go:84] Creating CNI manager for ""
	I0416 17:42:30.235787   57766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:42:30.235805   57766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:42:30.235838   57766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-970622 NodeName:pause-970622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:42:30.235999   57766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-970622"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:42:30.236077   57766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:42:30.248729   57766 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:42:30.248805   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:42:30.259932   57766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0416 17:42:30.279237   57766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:42:30.297756   57766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
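This is where the kubeadm/kubelet/kube-proxy configuration rendered at 17:42:30.235 is staged on the node. A minimal sketch, assuming the path from the log, of how to read it back (not part of the captured run):

    # Sketch: inspect the staged kubeadm config on the VM.
    minikube -p pause-970622 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new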
	I0416 17:42:30.316328   57766 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I0416 17:42:30.320886   57766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:42:30.458429   57766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:42:30.476656   57766 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622 for IP: 192.168.39.176
	I0416 17:42:30.476687   57766 certs.go:194] generating shared ca certs ...
	I0416 17:42:30.476704   57766 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:42:30.476873   57766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:42:30.476922   57766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:42:30.476936   57766 certs.go:256] generating profile certs ...
	I0416 17:42:30.477038   57766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/client.key
	I0416 17:42:30.477122   57766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.key.017177e3
	I0416 17:42:30.477208   57766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.key
	I0416 17:42:30.477345   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:42:30.477383   57766 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:42:30.477397   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:42:30.477437   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:42:30.477469   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:42:30.477511   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:42:30.477570   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:42:30.478318   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:42:30.506798   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:42:30.535507   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:42:30.565525   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:42:30.595303   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0416 17:42:30.625867   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 17:42:30.658552   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:42:30.687664   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:42:30.714776   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:42:30.800977   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:42:30.893218   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:42:31.107913   57766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:42:31.143884   57766 ssh_runner.go:195] Run: openssl version
	I0416 17:42:31.188299   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:42:31.361460   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.395338   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.395397   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.456853   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:42:31.558276   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:42:31.618067   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.629106   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.629170   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.641644   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:42:31.683687   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:42:31.715784   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.726103   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.726158   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.805323   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
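The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) follow the OpenSSL c_rehash convention: each is the subject-name hash of the certificate it points at, which is what the openssl x509 -hash -noout calls in the log compute. A minimal sketch reproducing one of them on the VM (not part of the captured run):

    # Sketch: the hash printed should match the b5213941.0 symlink created above.
    minikube -p pause-970622 ssh -- openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem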
	I0416 17:42:31.837420   57766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:42:31.849378   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:42:31.864850   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:42:31.889616   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:42:31.932682   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:42:31.948249   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:42:31.963764   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
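The openssl x509 -checkend 86400 calls above exit non-zero if the certificate expires within the next 86400 seconds (24 hours); here they verify that the existing control-plane certificates on the node are not about to expire. A minimal sketch of the same check run standalone inside the VM, e.g. via minikube ssh (not part of the captured run):

    # Sketch: exit status 0 means the cert is valid for at least another 24h,
    # non-zero means it expires within that window (or could not be read).
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    echo "exit status: $?"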
	I0416 17:42:31.975150   57766 kubeadm.go:391] StartCluster: {Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:31.975263   57766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:42:31.975307   57766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:42:32.085675   57766 cri.go:89] found id: "bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b"
	I0416 17:42:32.085703   57766 cri.go:89] found id: "7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1"
	I0416 17:42:32.085714   57766 cri.go:89] found id: "f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3"
	I0416 17:42:32.085719   57766 cri.go:89] found id: "bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc"
	I0416 17:42:32.085723   57766 cri.go:89] found id: "7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d"
	I0416 17:42:32.085737   57766 cri.go:89] found id: "9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447"
	I0416 17:42:32.085741   57766 cri.go:89] found id: "42d0967f68b2840f48e454f17063550bc595ee48a3129e743331163fb511fadb"
	I0416 17:42:32.085745   57766 cri.go:89] found id: "a7aca58c6cf26bf99d8c2e3e79dbb19b6626cebe102517cd978ba6cec252a6b0"
	I0416 17:42:32.085752   57766 cri.go:89] found id: "d365abb1c89be710e9b03f2ed845bcbf9cccca66c03853bbbdef1a0381987a52"
	I0416 17:42:32.085760   57766 cri.go:89] found id: "9c28d5fcbca20ac35f97ddd4dc7be237a460e6ae62f71c6bc3ae1dff833832c4"
	I0416 17:42:32.085768   57766 cri.go:89] found id: ""
	I0416 17:42:32.085819   57766 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.774738442Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e817d216-7ed6-4536-97c4-22894d10d674 name=/runtime.v1.RuntimeService/Status
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.781113702Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9595965-a97a-4f33-b1db-3cb451dc50b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.781213902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9595965-a97a-4f33-b1db-3cb451dc50b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.781552574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289366123803663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289366135777600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289366161721741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289366158815362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817,PodSandboxId:9a2044d33e9477f5468ef7619ecfaa96da2f5cc06ceb099ea113b7a257c21c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289352164297558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d,PodSandboxId:a85bea2f87b5acb76e2007f68f4758dc85eedbcd7fc0c0da9a0ec33f6fb8d26c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713289351501957598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io
.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713289351314241599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289351322872699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289351240517201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713289351079821561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d,PodSandboxId:4dc0eedaba0edbe8beba9af3390fa57b3c3e68a47b01be442f8d5b27180eaec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713289294797282553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447,PodSandboxId:c9a75b9c6519646a99cb7856bd465b4ebb3830f4f466cc2f2dbf6d02fc329f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713289294357318585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9595965-a97a-4f33-b1db-3cb451dc50b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.782315045Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0ec474a8-1308-4587-bebb-ea0b69b6ce65 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.782650091Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713289366323211762,StartedAt:1713289366533920565,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f54968e864aab0556b9ac05d7eb288db/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f54968e864aab0556b9ac05d7eb288db/containers/etcd/9b08de3e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-pause-970
622_f54968e864aab0556b9ac05d7eb288db/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0ec474a8-1308-4587-bebb-ea0b69b6ce65 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.783206313Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=4dad5a49-a2a3-4f96-b0da-d0d439b83a7d name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.783538711Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713289366313487722,StartedAt:1713289366443681080,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/91e3f824c3bfcc8c5f3c22df6d2732a4/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/91e3f824c3bfcc8c5f3c22df6d2732a4/containers/kube-scheduler/46eec20d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-pause-970622_91e3f824c3bfcc8c5f3c22df6d2732a4/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,Cp
uQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4dad5a49-a2a3-4f96-b0da-d0d439b83a7d name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.784104971Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=183322d1-919a-45d9-a67b-3c601a1b8fc8 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.784283739Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713289366276971282,StartedAt:1713289366393192283,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/360abb604b7c06c559ec13110b94d6e3/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/360abb604b7c06c559ec13110b94d6e3/containers/kube-controller-manager/420e903e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappi
ngs:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-pause-970622_360abb604b7c06c559ec13110b94d6e3/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageL
imits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=183322d1-919a-45d9-a67b-3c601a1b8fc8 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.784938944Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4,Verbose:false,}" file="otel-collector/interceptors.go:62" id=af1f4c93-d886-4b34-bf1d-4fc2ce5190a9 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.785036752Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713289366251777284,StartedAt:1713289366349726417,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e6ef930f623848dab209c9e1b14b0548/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e6ef930f623848dab209c9e1b14b0548/containers/kube-apiserver/e9f76587,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/v
ar/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-pause-970622_e6ef930f623848dab209c9e1b14b0548/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=af1f4c93-d886-4b34-bf1d-4fc2ce5190a9 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.785512881Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ffcf2d98-2d24-4e14-a8f3-a812c96821f0 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.785637746Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713289352396997039,StartedAt:1713289352433033408,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/c012b947-0bb8-47c8-aff6-fb19c9af0145/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c012b947-0bb8-47c8-aff6-fb19c9af0145/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c012b947-0bb8-47c8-aff6-fb19c9af0145/containers/coredns/bbecdccc,Readonly:false,SelinuxRelabel:false,Propagation:PRO
PAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/c012b947-0bb8-47c8-aff6-fb19c9af0145/volumes/kubernetes.io~projected/kube-api-access-jkst4,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-76f75df574-ddmc8_c012b947-0bb8-47c8-aff6-fb19c9af0145/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:965,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ffcf2d98-2d24-4e14-a8f3-a812c96821f0 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.786125097Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1e3c7119-6853-4bdc-83de-682cf5dda46d name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.786257182Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1713289352385321405,StartedAt:1713289352416267955,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.29.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/474c9f71-8089-4f36-b37c-9fb0639804c3/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/474c9f71-8089-4f36-b37c-9fb0639804c3/containers/kube-proxy/143b47a3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/
kubelet/pods/474c9f71-8089-4f36-b37c-9fb0639804c3/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/474c9f71-8089-4f36-b37c-9fb0639804c3/volumes/kubernetes.io~projected/kube-api-access-dp5hj,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-9k8tn_474c9f71-8089-4f36-b37c-9fb0639804c3/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-colle
ctor/interceptors.go:74" id=1e3c7119-6853-4bdc-83de-682cf5dda46d name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.792655549Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=23ef9e82-6096-4d8c-a6d5-51808069f6e6 name=/runtime.v1.ImageService/ListImages
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.793474321Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,RepoTags:[registry.k8s.io/kube-apiserver:v1.29.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322 registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c],Size_:128508878,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,RepoTags:[registry.k8s.io/kube-controller-manager:v1.29.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606 registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104],Size_:123142962,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Ima
ge{Id:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,RepoTags:[registry.k8s.io/kube-scheduler:v1.29.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88],Size_:60724018,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,RepoTags:[registry.k8s.io/kube-proxy:v1.29.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863],Size_:83634073,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de53
0d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,RepoTags:[docker.io/kindest/kindnetd:v20240202-8f1494ea],RepoDigests:[docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988 docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac],Size_:65291810,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=23ef9e82-6096-4d8c-a6d5-51808069f6e6 name=/runtime.v1.ImageService/ListImages
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.807612440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d92a2de7-b68f-4f07-8c38-b8600a1e4990 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.807675166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d92a2de7-b68f-4f07-8c38-b8600a1e4990 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.809504087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54c20177-f46f-4bcc-a8f1-d856fe040454 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.809850861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289385809832344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54c20177-f46f-4bcc-a8f1-d856fe040454 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.810778292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7844817b-2201-4536-895b-8de6e2ee3172 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.810827387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7844817b-2201-4536-895b-8de6e2ee3172 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:05 pause-970622 crio[2469]: time="2024-04-16 17:43:05.811094705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289366123803663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289366135777600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289366161721741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289366158815362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817,PodSandboxId:9a2044d33e9477f5468ef7619ecfaa96da2f5cc06ceb099ea113b7a257c21c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289352164297558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d,PodSandboxId:a85bea2f87b5acb76e2007f68f4758dc85eedbcd7fc0c0da9a0ec33f6fb8d26c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713289351501957598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io
.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713289351314241599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289351322872699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289351240517201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713289351079821561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d,PodSandboxId:4dc0eedaba0edbe8beba9af3390fa57b3c3e68a47b01be442f8d5b27180eaec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713289294797282553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447,PodSandboxId:c9a75b9c6519646a99cb7856bd465b4ebb3830f4f466cc2f2dbf6d02fc329f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713289294357318585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7844817b-2201-4536-895b-8de6e2ee3172 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2e62264fb71e5       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   19 seconds ago       Running             kube-controller-manager   2                   ec0d113699e3f       kube-controller-manager-pause-970622
	db002313ebce2       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   19 seconds ago       Running             kube-apiserver            2                   7c1a756460c1a       kube-apiserver-pause-970622
	3bc99a9165aa6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   19 seconds ago       Running             kube-scheduler            2                   8776bf30dc24a       kube-scheduler-pause-970622
	04e0dddad44cd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   19 seconds ago       Running             etcd                      2                   9f867bb17f7f2       etcd-pause-970622
	f88157dbb4291       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   33 seconds ago       Running             coredns                   1                   9a2044d33e947       coredns-76f75df574-ddmc8
	3d9b87bb8ec76       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   34 seconds ago       Running             kube-proxy                1                   a85bea2f87b5a       kube-proxy-9k8tn
	bfcc4acb5fcda       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   34 seconds ago       Exited              etcd                      1                   9f867bb17f7f2       etcd-pause-970622
	7f9eb39bd3e31       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   34 seconds ago       Exited              kube-controller-manager   1                   ec0d113699e3f       kube-controller-manager-pause-970622
	f498c505f3d02       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   34 seconds ago       Exited              kube-apiserver            1                   7c1a756460c1a       kube-apiserver-pause-970622
	bf820d6141334       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   34 seconds ago       Exited              kube-scheduler            1                   8776bf30dc24a       kube-scheduler-pause-970622
	7f1e6f554ece7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   4dc0eedaba0ed       coredns-76f75df574-ddmc8
	9d688fdfab050       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   About a minute ago   Exited              kube-proxy                0                   c9a75b9c65196       kube-proxy-9k8tn
	
	
	==> coredns [7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49933 - 52742 "HINFO IN 6829272663999387613.5345536729571950507. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008738678s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1713941778]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:41:35.071) (total time: 30002ms):
	Trace[1713941778]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:42:05.073)
	Trace[1713941778]: [30.002709471s] [30.002709471s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[152937220]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:41:35.073) (total time: 30000ms):
	Trace[152937220]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:42:05.074)
	Trace[152937220]: [30.000887242s] [30.000887242s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1315438823]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:41:35.073) (total time: 30001ms):
	Trace[1315438823]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:42:05.074)
	Trace[1315438823]: [30.001261654s] [30.001261654s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49646->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49662->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49664->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[988847676]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:42:32.707) (total time: 10998ms):
	Trace[988847676]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49662->10.96.0.1:443: read: connection reset by peer 10998ms (17:42:43.706)
	Trace[988847676]: [10.998313991s] [10.998313991s] END
	[INFO] plugin/kubernetes: Trace[10955247]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:42:32.708) (total time: 10998ms):
	Trace[10955247]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49664->10.96.0.1:443: read: connection reset by peer 10997ms (17:42:43.705)
	Trace[10955247]: [10.998070558s] [10.998070558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49664->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49662->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[730141636]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:42:32.702) (total time: 11003ms):
	Trace[730141636]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49646->10.96.0.1:443: read: connection reset by peer 11002ms (17:42:43.705)
	Trace[730141636]: [11.003112927s] [11.003112927s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49646->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-970622
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-970622
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=pause-970622
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_41_19_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-970622
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:42:49 +0000   Tue, 16 Apr 2024 17:41:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:42:49 +0000   Tue, 16 Apr 2024 17:41:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:42:49 +0000   Tue, 16 Apr 2024 17:41:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:42:49 +0000   Tue, 16 Apr 2024 17:41:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    pause-970622
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f1ddad7b6754810980440d1c321784c
	  System UUID:                5f1ddad7-b675-4810-9804-40d1c321784c
	  Boot ID:                    18f46747-325f-4365-8cff-9ab12676fe46
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-ddmc8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-pause-970622                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         108s
	  kube-system                 kube-apiserver-pause-970622             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-pause-970622    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-9k8tn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-970622             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  Starting                 91s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     113s (x7 over 114s)  kubelet          Node pause-970622 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  113s (x8 over 114s)  kubelet          Node pause-970622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 114s)  kubelet          Node pause-970622 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  107s                 kubelet          Node pause-970622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s                 kubelet          Node pause-970622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s                 kubelet          Node pause-970622 status is now: NodeHasSufficientPID
	  Normal  NodeReady                107s                 kubelet          Node pause-970622 status is now: NodeReady
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           94s                  node-controller  Node pause-970622 event: Registered Node pause-970622 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node pause-970622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node pause-970622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node pause-970622 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node pause-970622 event: Registered Node pause-970622 in Controller
	
	
	==> dmesg <==
	[Apr16 17:41] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.127325] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.215747] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.121947] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.307353] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.996822] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.068336] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.646234] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.554354] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.301350] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.081339] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.290343] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.006180] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[ +11.758958] kauditd_printk_skb: 88 callbacks suppressed
	[Apr16 17:42] systemd-fstab-generator[2390]: Ignoring "noauto" option for root device
	[  +0.128655] systemd-fstab-generator[2402]: Ignoring "noauto" option for root device
	[  +0.179989] systemd-fstab-generator[2416]: Ignoring "noauto" option for root device
	[  +0.134995] systemd-fstab-generator[2428]: Ignoring "noauto" option for root device
	[  +0.287669] systemd-fstab-generator[2456]: Ignoring "noauto" option for root device
	[  +6.598033] systemd-fstab-generator[2582]: Ignoring "noauto" option for root device
	[  +0.078039] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.560954] kauditd_printk_skb: 87 callbacks suppressed
	[  +2.371930] systemd-fstab-generator[3315]: Ignoring "noauto" option for root device
	[  +4.126359] kauditd_printk_skb: 38 callbacks suppressed
	[Apr16 17:43] systemd-fstab-generator[3677]: Ignoring "noauto" option for root device
	
	
	==> etcd [04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507] <==
	{"level":"info","ts":"2024-04-16T17:42:46.635703Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:46.635716Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:46.635312Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:42:46.635916Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f70d523d4475ce3b","initial-advertise-peer-urls":["https://192.168.39.176:2380"],"listen-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:42:46.635961Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T17:42:46.635331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-16T17:42:46.636053Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-16T17:42:46.646701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=(17801975325160492603)"}
	{"level":"info","ts":"2024-04-16T17:42:46.646789Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","added-peer-id":"f70d523d4475ce3b","added-peer-peer-urls":["https://192.168.39.176:2380"]}
	{"level":"info","ts":"2024-04-16T17:42:46.647037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:42:46.647092Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:42:47.800857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T17:42:47.800891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:42:47.800918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgPreVoteResp from f70d523d4475ce3b at term 2"}
	{"level":"info","ts":"2024-04-16T17:42:47.800929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T17:42:47.800935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgVoteResp from f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-04-16T17:42:47.800943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became leader at term 3"}
	{"level":"info","ts":"2024-04-16T17:42:47.800958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f70d523d4475ce3b elected leader f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-04-16T17:42:47.806547Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f70d523d4475ce3b","local-member-attributes":"{Name:pause-970622 ClientURLs:[https://192.168.39.176:2379]}","request-path":"/0/members/f70d523d4475ce3b/attributes","cluster-id":"40fea5b1ef9207e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:42:47.80672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:42:47.809149Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.176:2379"}
	{"level":"info","ts":"2024-04-16T17:42:47.809799Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:42:47.811568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T17:42:47.811668Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:42:47.811719Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b] <==
	{"level":"info","ts":"2024-04-16T17:42:31.848449Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"59.605517ms"}
	{"level":"info","ts":"2024-04-16T17:42:31.898893Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-16T17:42:31.929679Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","commit-index":457}
	{"level":"info","ts":"2024-04-16T17:42:31.929859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-16T17:42:31.930076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became follower at term 2"}
	{"level":"info","ts":"2024-04-16T17:42:31.930113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f70d523d4475ce3b [peers: [], term: 2, commit: 457, applied: 0, lastindex: 457, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-16T17:42:31.933084Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-16T17:42:31.96578Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":435}
	{"level":"info","ts":"2024-04-16T17:42:31.974711Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-16T17:42:31.993611Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f70d523d4475ce3b","timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:42:31.993956Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f70d523d4475ce3b"}
	{"level":"info","ts":"2024-04-16T17:42:31.99403Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"f70d523d4475ce3b","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-16T17:42:31.995054Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-16T17:42:31.995826Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:31.995893Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:31.995922Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:31.99677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=(17801975325160492603)"}
	{"level":"info","ts":"2024-04-16T17:42:31.997176Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","added-peer-id":"f70d523d4475ce3b","added-peer-peer-urls":["https://192.168.39.176:2380"]}
	{"level":"info","ts":"2024-04-16T17:42:31.997934Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:42:32.000466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:42:32.019997Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:42:32.023507Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-16T17:42:32.024571Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-16T17:42:32.035889Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f70d523d4475ce3b","initial-advertise-peer-urls":["https://192.168.39.176:2380"],"listen-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:42:32.036161Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 17:43:06 up 2 min,  0 users,  load average: 1.03, 0.48, 0.18
	Linux pause-970622 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4] <==
	I0416 17:42:49.192033       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 17:42:49.193161       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 17:42:49.193623       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 17:42:49.193660       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 17:42:49.193667       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 17:42:49.205054       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:42:49.205093       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:42:49.205099       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:42:49.211698       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 17:42:49.216453       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 17:42:49.266505       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 17:42:49.277213       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:42:49.305277       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 17:42:49.305447       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:42:49.305798       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:42:49.305967       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 17:42:49.306212       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:42:50.108260       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:42:50.959519       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:42:50.972008       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:42:51.009099       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:42:51.035603       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:42:51.046155       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:43:01.727611       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 17:43:01.729672       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3] <==
	I0416 17:42:32.196338       1 options.go:222] external host was not specified, using 192.168.39.176
	I0416 17:42:32.201772       1 server.go:148] Version: v1.29.3
	I0416 17:42:32.201815       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0416 17:42:32.764022       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:32.764919       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0416 17:42:32.764988       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0416 17:42:32.772925       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0416 17:42:32.772983       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0416 17:42:32.773220       1 instance.go:297] Using reconciler: lease
	W0416 17:42:32.774304       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:33.765494       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:33.765580       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:33.774988       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:35.082680       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:35.085085       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:35.125644       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:37.193116       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:37.211022       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:37.496723       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:40.857129       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:41.670717       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:41.916208       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f] <==
	I0416 17:43:01.768766       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-970622"
	I0416 17:43:01.768833       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0416 17:43:01.768878       1 shared_informer.go:318] Caches are synced for node
	I0416 17:43:01.768908       1 range_allocator.go:174] "Sending events to api server"
	I0416 17:43:01.768951       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0416 17:43:01.768957       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0416 17:43:01.768961       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0416 17:43:01.768992       1 shared_informer.go:318] Caches are synced for job
	I0416 17:43:01.769589       1 event.go:376] "Event occurred" object="pause-970622" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-970622 event: Registered Node pause-970622 in Controller"
	I0416 17:43:01.779112       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0416 17:43:01.779889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="619.714µs"
	I0416 17:43:01.780010       1 shared_informer.go:318] Caches are synced for deployment
	I0416 17:43:01.788961       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0416 17:43:01.803123       1 shared_informer.go:318] Caches are synced for expand
	I0416 17:43:01.816122       1 shared_informer.go:318] Caches are synced for stateful set
	I0416 17:43:01.833071       1 shared_informer.go:318] Caches are synced for attach detach
	I0416 17:43:01.840480       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 17:43:01.855015       1 shared_informer.go:318] Caches are synced for ephemeral
	I0416 17:43:01.860130       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 17:43:01.873871       1 shared_informer.go:318] Caches are synced for PVC protection
	I0416 17:43:01.883754       1 shared_informer.go:318] Caches are synced for persistent volume
	I0416 17:43:01.884577       1 shared_informer.go:318] Caches are synced for HPA
	I0416 17:43:02.301933       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 17:43:02.302105       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0416 17:43:02.308507       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1] <==
	
	
	==> kube-proxy [3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d] <==
	I0416 17:42:32.816398       1 server_others.go:72] "Using iptables proxy"
	E0416 17:42:43.706891       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-970622\": dial tcp 192.168.39.176:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.176:59746->192.168.39.176:8443: read: connection reset by peer"
	E0416 17:42:44.846912       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-970622\": dial tcp 192.168.39.176:8443: connect: connection refused"
	I0416 17:42:49.237002       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	I0416 17:42:49.322268       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:42:49.322552       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:42:49.322756       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:42:49.328409       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:42:49.330672       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:42:49.330772       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:42:49.333272       1 config.go:188] "Starting service config controller"
	I0416 17:42:49.334087       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:42:49.334140       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:42:49.334869       1 config.go:315] "Starting node config controller"
	I0416 17:42:49.335869       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:42:49.339294       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:42:49.339502       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:42:49.435133       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:42:49.436726       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447] <==
	I0416 17:41:34.814412       1 server_others.go:72] "Using iptables proxy"
	I0416 17:41:34.853248       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	I0416 17:41:35.072635       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:41:35.074143       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:41:35.074267       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:41:35.078689       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:41:35.079228       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:41:35.079276       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:41:35.081654       1 config.go:188] "Starting service config controller"
	I0416 17:41:35.081938       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:41:35.081993       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:41:35.082018       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:41:35.083799       1 config.go:315] "Starting node config controller"
	I0416 17:41:35.083840       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:41:35.182254       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:41:35.182471       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:41:35.183884       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0] <==
	I0416 17:42:47.377248       1 serving.go:380] Generated self-signed cert in-memory
	W0416 17:42:49.209686       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 17:42:49.209795       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:42:49.209912       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 17:42:49.209940       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 17:42:49.256308       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 17:42:49.256427       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:42:49.261181       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 17:42:49.261294       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:42:49.261307       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:42:49.261323       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 17:42:49.363815       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc] <==
	I0416 17:42:32.609739       1 serving.go:380] Generated self-signed cert in-memory
	W0416 17:42:43.706851       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.39.176:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.176:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.176:59772->192.168.39.176:8443: read: connection reset by peer
	W0416 17:42:43.706889       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 17:42:43.706899       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 17:42:43.719133       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 17:42:43.719202       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:42:43.723409       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:42:43.723526       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0416 17:42:43.723605       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:42:43.723657       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:42:43.725753       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0416 17:42:43.725892       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0416 17:42:43.727192       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 17:42:43.728110       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 17:42:43.728218       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0416 17:42:43.729745       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.832337    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/360abb604b7c06c559ec13110b94d6e3-ca-certs\") pod \"kube-controller-manager-pause-970622\" (UID: \"360abb604b7c06c559ec13110b94d6e3\") " pod="kube-system/kube-controller-manager-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.832527    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/360abb604b7c06c559ec13110b94d6e3-kubeconfig\") pod \"kube-controller-manager-pause-970622\" (UID: \"360abb604b7c06c559ec13110b94d6e3\") " pod="kube-system/kube-controller-manager-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.832714    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/360abb604b7c06c559ec13110b94d6e3-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-970622\" (UID: \"360abb604b7c06c559ec13110b94d6e3\") " pod="kube-system/kube-controller-manager-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.832892    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91e3f824c3bfcc8c5f3c22df6d2732a4-kubeconfig\") pod \"kube-scheduler-pause-970622\" (UID: \"91e3f824c3bfcc8c5f3c22df6d2732a4\") " pod="kube-system/kube-scheduler-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.833050    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f54968e864aab0556b9ac05d7eb288db-etcd-data\") pod \"etcd-pause-970622\" (UID: \"f54968e864aab0556b9ac05d7eb288db\") " pod="kube-system/etcd-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.936795    3322 kubelet_node_status.go:73] "Attempting to register node" node="pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: E0416 17:42:45.937575    3322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.176:8443: connect: connection refused" node="pause-970622"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.097798    3322 scope.go:117] "RemoveContainer" containerID="bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.100264    3322 scope.go:117] "RemoveContainer" containerID="f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.101628    3322 scope.go:117] "RemoveContainer" containerID="7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.103440    3322 scope.go:117] "RemoveContainer" containerID="bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: E0416 17:42:46.229183    3322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-970622?timeout=10s\": dial tcp 192.168.39.176:8443: connect: connection refused" interval="800ms"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.339895    3322 kubelet_node_status.go:73] "Attempting to register node" node="pause-970622"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: E0416 17:42:46.341228    3322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.176:8443: connect: connection refused" node="pause-970622"
	Apr 16 17:42:47 pause-970622 kubelet[3322]: I0416 17:42:47.143032    3322 kubelet_node_status.go:73] "Attempting to register node" node="pause-970622"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.345201    3322 kubelet_node_status.go:112] "Node was previously registered" node="pause-970622"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.345277    3322 kubelet_node_status.go:76] "Successfully registered node" node="pause-970622"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.346846    3322 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.348002    3322 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.597034    3322 apiserver.go:52] "Watching apiserver"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.603447    3322 topology_manager.go:215] "Topology Admit Handler" podUID="474c9f71-8089-4f36-b37c-9fb0639804c3" podNamespace="kube-system" podName="kube-proxy-9k8tn"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.603814    3322 topology_manager.go:215] "Topology Admit Handler" podUID="c012b947-0bb8-47c8-aff6-fb19c9af0145" podNamespace="kube-system" podName="coredns-76f75df574-ddmc8"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.623052    3322 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.665472    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/474c9f71-8089-4f36-b37c-9fb0639804c3-xtables-lock\") pod \"kube-proxy-9k8tn\" (UID: \"474c9f71-8089-4f36-b37c-9fb0639804c3\") " pod="kube-system/kube-proxy-9k8tn"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.665551    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/474c9f71-8089-4f36-b37c-9fb0639804c3-lib-modules\") pod \"kube-proxy-9k8tn\" (UID: \"474c9f71-8089-4f36-b37c-9fb0639804c3\") " pod="kube-system/kube-proxy-9k8tn"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:43:05.290044   58013 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18649-3628/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-970622 -n pause-970622
helpers_test.go:261: (dbg) Run:  kubectl --context pause-970622 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-970622 -n pause-970622
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-970622 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-970622 logs -n 25: (1.472234788s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-512869            | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC | 16 Apr 24 17:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-795352                              | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC | 16 Apr 24 17:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-795352             | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC | 16 Apr 24 17:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-795352                              | old-k8s-version-795352       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| start   | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:30 UTC | 16 Apr 24 17:31 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	| delete  | -p                                                     | disable-driver-mounts-376814 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	|         | disable-driver-mounts-376814                           |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-368813                  | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-512869                 | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:35 UTC |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:37 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC | 16 Apr 24 17:38 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:38 UTC | 16 Apr 24 17:38 UTC |
	| start   | -p stopped-upgrade-446675                              | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:38 UTC | 16 Apr 24 17:39 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	| stop    | stopped-upgrade-446675 stop                            | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:39 UTC | 16 Apr 24 17:39 UTC |
	| start   | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:39 UTC | 16 Apr 24 17:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:40 UTC |
	| start   | -p pause-970622 --memory=2048                          | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:42 UTC |
	|         | --install-addons=false                                 |                              |         |                |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:42 UTC | 16 Apr 24 17:43 UTC |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:42:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:42:16.064687   57766 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:42:16.064805   57766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:42:16.064816   57766 out.go:304] Setting ErrFile to fd 2...
	I0416 17:42:16.064823   57766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:42:16.065109   57766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:42:16.065733   57766 out.go:298] Setting JSON to false
	I0416 17:42:16.066729   57766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5088,"bootTime":1713284248,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:42:16.066810   57766 start.go:139] virtualization: kvm guest
	I0416 17:42:16.069699   57766 out.go:177] * [pause-970622] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:42:16.070884   57766 notify.go:220] Checking for updates...
	I0416 17:42:16.070891   57766 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:42:16.072082   57766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:42:16.073252   57766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:42:16.074328   57766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:42:16.075482   57766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:42:16.076590   57766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:42:16.078195   57766 config.go:182] Loaded profile config "pause-970622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:42:16.078688   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.078748   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.094555   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I0416 17:42:16.095031   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.095569   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.095589   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.095917   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.096090   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.096345   57766 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:42:16.096603   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.096634   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.110721   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0416 17:42:16.111174   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.111596   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.111615   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.111978   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.112175   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.147098   57766 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:42:16.148452   57766 start.go:297] selected driver: kvm2
	I0416 17:42:16.148467   57766 start.go:901] validating driver "kvm2" against &{Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:16.148605   57766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:42:16.149020   57766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:42:16.149113   57766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:42:16.163353   57766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:42:16.164252   57766 cni.go:84] Creating CNI manager for ""
	I0416 17:42:16.164275   57766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:42:16.164353   57766 start.go:340] cluster config:
	{Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:16.164557   57766 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:42:16.166182   57766 out.go:177] * Starting "pause-970622" primary control-plane node in "pause-970622" cluster
	I0416 17:42:16.167463   57766 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:42:16.167506   57766 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:42:16.167515   57766 cache.go:56] Caching tarball of preloaded images
	I0416 17:42:16.167594   57766 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:42:16.167608   57766 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:42:16.167713   57766 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/config.json ...
	I0416 17:42:16.167890   57766 start.go:360] acquireMachinesLock for pause-970622: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:42:16.167928   57766 start.go:364] duration metric: took 21.543µs to acquireMachinesLock for "pause-970622"
	I0416 17:42:16.167938   57766 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:42:16.167947   57766 fix.go:54] fixHost starting: 
	I0416 17:42:16.168253   57766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:42:16.168294   57766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:42:16.182553   57766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0416 17:42:16.182912   57766 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:42:16.183330   57766 main.go:141] libmachine: Using API Version  1
	I0416 17:42:16.183348   57766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:42:16.183646   57766 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:42:16.183853   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.183970   57766 main.go:141] libmachine: (pause-970622) Calling .GetState
	I0416 17:42:16.185520   57766 fix.go:112] recreateIfNeeded on pause-970622: state=Running err=<nil>
	W0416 17:42:16.185539   57766 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:42:16.187252   57766 out.go:177] * Updating the running kvm2 "pause-970622" VM ...
	I0416 17:42:16.188463   57766 machine.go:94] provisionDockerMachine start ...
	I0416 17:42:16.188508   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:16.188695   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.191471   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.191856   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.191882   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.192009   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.192188   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.192328   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.192477   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.192584   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.192761   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.192771   57766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:42:16.298084   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-970622
	
	I0416 17:42:16.298126   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.298482   57766 buildroot.go:166] provisioning hostname "pause-970622"
	I0416 17:42:16.298513   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.298725   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.301325   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.301725   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.301759   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.301930   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.302132   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.302305   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.302493   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.302695   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.302894   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.302910   57766 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-970622 && echo "pause-970622" | sudo tee /etc/hostname
	I0416 17:42:16.426028   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-970622
	
	I0416 17:42:16.426063   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.429199   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.429600   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.429639   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.429832   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.430056   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.430212   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.430379   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.430592   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.430809   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.430828   57766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-970622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-970622/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-970622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:42:16.538293   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:42:16.538325   57766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:42:16.538374   57766 buildroot.go:174] setting up certificates
	I0416 17:42:16.538384   57766 provision.go:84] configureAuth start
	I0416 17:42:16.538398   57766 main.go:141] libmachine: (pause-970622) Calling .GetMachineName
	I0416 17:42:16.538717   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:16.541494   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.541839   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.541880   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.542039   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.544408   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.544730   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.544756   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.544962   57766 provision.go:143] copyHostCerts
	I0416 17:42:16.545018   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:42:16.545041   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:42:16.545125   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:42:16.545263   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:42:16.545276   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:42:16.545326   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:42:16.545413   57766 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:42:16.545423   57766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:42:16.545457   57766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:42:16.545537   57766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.pause-970622 san=[127.0.0.1 192.168.39.176 localhost minikube pause-970622]
	I0416 17:42:16.585110   57766 provision.go:177] copyRemoteCerts
	I0416 17:42:16.585181   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:42:16.585204   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.588049   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.588468   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.588501   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.588700   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.588901   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.589127   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.589304   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:16.674614   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:42:16.707537   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0416 17:42:16.740926   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:42:16.772124   57766 provision.go:87] duration metric: took 233.730002ms to configureAuth
	I0416 17:42:16.772154   57766 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:42:16.772406   57766 config.go:182] Loaded profile config "pause-970622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:42:16.772510   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:16.775240   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.775552   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:16.775601   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:16.775789   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:16.775957   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.776163   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:16.776308   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:16.776468   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:16.776631   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:16.776653   57766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:42:22.391526   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:42:22.391554   57766 machine.go:97] duration metric: took 6.203075471s to provisionDockerMachine
	I0416 17:42:22.391568   57766 start.go:293] postStartSetup for "pause-970622" (driver="kvm2")
	I0416 17:42:22.391581   57766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:42:22.391610   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.391947   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:42:22.391971   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.394425   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.394768   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.394798   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.394917   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.395088   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.395244   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.395368   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.477278   57766 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:42:22.482108   57766 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:42:22.482129   57766 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:42:22.482208   57766 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:42:22.482311   57766 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:42:22.482435   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:42:22.493526   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:42:22.520957   57766 start.go:296] duration metric: took 129.359894ms for postStartSetup
	I0416 17:42:22.520998   57766 fix.go:56] duration metric: took 6.353054008s for fixHost
	I0416 17:42:22.521022   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.523922   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.524251   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.524280   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.524423   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.524641   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.524917   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.525056   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.525243   57766 main.go:141] libmachine: Using SSH client type: native
	I0416 17:42:22.525511   57766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0416 17:42:22.525532   57766 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:42:22.629958   57766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713289342.618991630
	
	I0416 17:42:22.629990   57766 fix.go:216] guest clock: 1713289342.618991630
	I0416 17:42:22.630000   57766 fix.go:229] Guest: 2024-04-16 17:42:22.61899163 +0000 UTC Remote: 2024-04-16 17:42:22.521003217 +0000 UTC m=+6.507572872 (delta=97.988413ms)
	I0416 17:42:22.630056   57766 fix.go:200] guest clock delta is within tolerance: 97.988413ms
	I0416 17:42:22.630064   57766 start.go:83] releasing machines lock for "pause-970622", held for 6.462129483s
	I0416 17:42:22.630096   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.630360   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:22.633198   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.633601   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.633630   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.633830   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634328   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634497   57766 main.go:141] libmachine: (pause-970622) Calling .DriverName
	I0416 17:42:22.634572   57766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:42:22.634613   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.634698   57766 ssh_runner.go:195] Run: cat /version.json
	I0416 17:42:22.634723   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHHostname
	I0416 17:42:22.637123   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637450   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637481   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.637500   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.637659   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.637841   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.637895   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:22.637948   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:22.638020   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.638088   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHPort
	I0416 17:42:22.638225   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.638243   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHKeyPath
	I0416 17:42:22.638398   57766 main.go:141] libmachine: (pause-970622) Calling .GetSSHUsername
	I0416 17:42:22.638566   57766 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/pause-970622/id_rsa Username:docker}
	I0416 17:42:22.745845   57766 ssh_runner.go:195] Run: systemctl --version
	I0416 17:42:22.753300   57766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:42:22.911391   57766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:42:22.918725   57766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:42:22.918783   57766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:42:22.928742   57766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 17:42:22.928762   57766 start.go:494] detecting cgroup driver to use...
	I0416 17:42:22.928829   57766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:42:22.946142   57766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:42:22.962395   57766 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:42:22.962444   57766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:42:22.977464   57766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:42:22.991813   57766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:42:23.124475   57766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:42:23.263793   57766 docker.go:233] disabling docker service ...
	I0416 17:42:23.263882   57766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:42:23.285179   57766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:42:23.301794   57766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:42:23.437427   57766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:42:23.564625   57766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:42:23.579837   57766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:42:23.603432   57766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:42:23.603503   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.615464   57766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:42:23.615531   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.627243   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.638519   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.649491   57766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:42:23.660860   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.672379   57766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.684949   57766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:42:23.696353   57766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:42:23.706612   57766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:42:23.717299   57766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:42:23.863740   57766 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:42:29.929060   57766 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.065271729s)
	I0416 17:42:29.929092   57766 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:42:29.929157   57766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:42:29.935007   57766 start.go:562] Will wait 60s for crictl version
	I0416 17:42:29.935058   57766 ssh_runner.go:195] Run: which crictl
	I0416 17:42:29.939727   57766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:42:29.990234   57766 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:42:29.990344   57766 ssh_runner.go:195] Run: crio --version
	I0416 17:42:30.031923   57766 ssh_runner.go:195] Run: crio --version
	I0416 17:42:30.073505   57766 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 17:42:30.074763   57766 main.go:141] libmachine: (pause-970622) Calling .GetIP
	I0416 17:42:30.077893   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:30.078312   57766 main.go:141] libmachine: (pause-970622) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:02:47", ip: ""} in network mk-pause-970622: {Iface:virbr1 ExpiryTime:2024-04-16 18:40:54 +0000 UTC Type:0 Mac:52:54:00:37:02:47 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-970622 Clientid:01:52:54:00:37:02:47}
	I0416 17:42:30.078335   57766 main.go:141] libmachine: (pause-970622) DBG | domain pause-970622 has defined IP address 192.168.39.176 and MAC address 52:54:00:37:02:47 in network mk-pause-970622
	I0416 17:42:30.078591   57766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 17:42:30.083804   57766 kubeadm.go:877] updating cluster {Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:42:30.083933   57766 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:42:30.083973   57766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:42:30.139181   57766 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:42:30.139202   57766 crio.go:433] Images already preloaded, skipping extraction
	I0416 17:42:30.139251   57766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:42:30.176379   57766 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:42:30.176402   57766 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:42:30.176410   57766 kubeadm.go:928] updating node { 192.168.39.176 8443 v1.29.3 crio true true} ...
	I0416 17:42:30.176507   57766 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-970622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:42:30.176586   57766 ssh_runner.go:195] Run: crio config
	I0416 17:42:30.235761   57766 cni.go:84] Creating CNI manager for ""
	I0416 17:42:30.235787   57766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:42:30.235805   57766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:42:30.235838   57766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-970622 NodeName:pause-970622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:42:30.235999   57766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-970622"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
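For reference, one way to inspect the files that minikube renders from this config on the node is to read them back over SSH. This is only a sketch, assuming the pause-970622 profile from this log is still running; the paths are the ones shown in the scp steps below.

	# dump the generated kubeadm config and kubelet drop-in from inside the guest
	minikube -p pause-970622 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube -p pause-970622 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf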
	
	I0416 17:42:30.236077   57766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:42:30.248729   57766 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:42:30.248805   57766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:42:30.259932   57766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0416 17:42:30.279237   57766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:42:30.297756   57766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0416 17:42:30.316328   57766 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I0416 17:42:30.320886   57766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:42:30.458429   57766 ssh_runner.go:195] Run: sudo systemctl start kubelet
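If the kubelet restart at this step needed debugging, the unit state could be checked directly on the node. A sketch, assuming the same pause-970622 profile; the unit name comes from the systemd files written above.

	# check that kubelet came back up after daemon-reload + start
	minikube -p pause-970622 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p pause-970622 ssh -- sudo journalctl -u kubelet --no-pager -n 50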
	I0416 17:42:30.476656   57766 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622 for IP: 192.168.39.176
	I0416 17:42:30.476687   57766 certs.go:194] generating shared ca certs ...
	I0416 17:42:30.476704   57766 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:42:30.476873   57766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:42:30.476922   57766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:42:30.476936   57766 certs.go:256] generating profile certs ...
	I0416 17:42:30.477038   57766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/client.key
	I0416 17:42:30.477122   57766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.key.017177e3
	I0416 17:42:30.477208   57766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.key
	I0416 17:42:30.477345   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:42:30.477383   57766 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:42:30.477397   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:42:30.477437   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:42:30.477469   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:42:30.477511   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:42:30.477570   57766 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:42:30.478318   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:42:30.506798   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:42:30.535507   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:42:30.565525   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:42:30.595303   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0416 17:42:30.625867   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 17:42:30.658552   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:42:30.687664   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/pause-970622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:42:30.714776   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:42:30.800977   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:42:30.893218   57766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:42:31.107913   57766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:42:31.143884   57766 ssh_runner.go:195] Run: openssl version
	I0416 17:42:31.188299   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:42:31.361460   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.395338   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.395397   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:42:31.456853   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:42:31.558276   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:42:31.618067   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.629106   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.629170   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:42:31.641644   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:42:31.683687   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:42:31.715784   57766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.726103   57766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.726158   57766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:42:31.805323   57766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
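The hash-named symlinks created above follow OpenSSL's subject-hash convention (/etc/ssl/certs/<hash>.0 pointing at the CA file). The hash value can be reproduced by hand; b5213941 is the value the log shows for minikubeCA.pem.

	# print the subject hash used for the /etc/ssl/certs symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0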
	I0416 17:42:31.837420   57766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:42:31.849378   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:42:31.864850   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:42:31.889616   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:42:31.932682   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:42:31.948249   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:42:31.963764   57766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
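The -checkend 86400 calls above exit non-zero if a certificate expires within the next 24 hours, which is how minikube decides whether to regenerate it. A quick manual spot-check of one of the same certificates (a sketch, run as root inside the guest) could be:

	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for >24h"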
	I0416 17:42:31.975150   57766 kubeadm.go:391] StartCluster: {Name:pause-970622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:pause-970622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:42:31.975263   57766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:42:31.975307   57766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:42:32.085675   57766 cri.go:89] found id: "bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b"
	I0416 17:42:32.085703   57766 cri.go:89] found id: "7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1"
	I0416 17:42:32.085714   57766 cri.go:89] found id: "f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3"
	I0416 17:42:32.085719   57766 cri.go:89] found id: "bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc"
	I0416 17:42:32.085723   57766 cri.go:89] found id: "7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d"
	I0416 17:42:32.085737   57766 cri.go:89] found id: "9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447"
	I0416 17:42:32.085741   57766 cri.go:89] found id: "42d0967f68b2840f48e454f17063550bc595ee48a3129e743331163fb511fadb"
	I0416 17:42:32.085745   57766 cri.go:89] found id: "a7aca58c6cf26bf99d8c2e3e79dbb19b6626cebe102517cd978ba6cec252a6b0"
	I0416 17:42:32.085752   57766 cri.go:89] found id: "d365abb1c89be710e9b03f2ed845bcbf9cccca66c03853bbbdef1a0381987a52"
	I0416 17:42:32.085760   57766 cri.go:89] found id: "9c28d5fcbca20ac35f97ddd4dc7be237a460e6ae62f71c6bc3ae1dff833832c4"
	I0416 17:42:32.085768   57766 cri.go:89] found id: ""
	I0416 17:42:32.085819   57766 ssh_runner.go:195] Run: sudo runc list -f json
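The container IDs listed above come from the same CRI filter the log shows; the listing can be reproduced by hand on the node (requires root) with the commands minikube itself ran here.

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json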
	
	
	==> CRI-O <==
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.814185866Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46ddc7a2-0d35-44a9-b44c-9e37c5076c81 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.815484640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4436cb0d-d527-4ebb-bb3f-57d609b188c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.815969021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289387815938099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4436cb0d-d527-4ebb-bb3f-57d609b188c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.818266216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae1eabe3-f14a-4a49-9e9d-70d230d7cef6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.818320473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae1eabe3-f14a-4a49-9e9d-70d230d7cef6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.818945946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289366123803663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289366135777600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289366161721741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289366158815362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817,PodSandboxId:9a2044d33e9477f5468ef7619ecfaa96da2f5cc06ceb099ea113b7a257c21c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289352164297558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d,PodSandboxId:a85bea2f87b5acb76e2007f68f4758dc85eedbcd7fc0c0da9a0ec33f6fb8d26c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713289351501957598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io
.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713289351314241599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289351322872699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289351240517201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713289351079821561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d,PodSandboxId:4dc0eedaba0edbe8beba9af3390fa57b3c3e68a47b01be442f8d5b27180eaec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713289294797282553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447,PodSandboxId:c9a75b9c6519646a99cb7856bd465b4ebb3830f4f466cc2f2dbf6d02fc329f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713289294357318585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae1eabe3-f14a-4a49-9e9d-70d230d7cef6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.875499268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd236558-2d49-4df1-8b38-875f0cdbd5d2 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.875606380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd236558-2d49-4df1-8b38-875f0cdbd5d2 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.877121679Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=402bee63-2c99-41e2-a11a-b925c0a2e2fc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.877537095Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f82f2146-4db4-4838-8ac9-4b14a7ba57a0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.877587822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289387877562710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=402bee63-2c99-41e2-a11a-b925c0a2e2fc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.877780552Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9a2044d33e9477f5468ef7619ecfaa96da2f5cc06ceb099ea113b7a257c21c20,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-ddmc8,Uid:c012b947-0bb8-47c8-aff6-fb19c9af0145,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713289351122625219,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T17:41:33.826946631Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-970622,Uid:360abb604b7c06c559ec13110b94d6e3,Namespace:kub
e-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713289350850493066,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 360abb604b7c06c559ec13110b94d6e3,kubernetes.io/config.seen: 2024-04-16T17:41:19.385339517Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&PodSandboxMetadata{Name:etcd-pause-970622,Uid:f54968e864aab0556b9ac05d7eb288db,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713289350840937106,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,tier: cont
rol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.176:2379,kubernetes.io/config.hash: f54968e864aab0556b9ac05d7eb288db,kubernetes.io/config.seen: 2024-04-16T17:41:19.385332821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a85bea2f87b5acb76e2007f68f4758dc85eedbcd7fc0c0da9a0ec33f6fb8d26c,Metadata:&PodSandboxMetadata{Name:kube-proxy-9k8tn,Uid:474c9f71-8089-4f36-b37c-9fb0639804c3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713289350798155139,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T17:41:33.691684413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8776bf30dc24a653b67819ffdfec8a6b9
f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-970622,Uid:91e3f824c3bfcc8c5f3c22df6d2732a4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713289350778121515,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 91e3f824c3bfcc8c5f3c22df6d2732a4,kubernetes.io/config.seen: 2024-04-16T17:41:19.385405199Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-970622,Uid:e6ef930f623848dab209c9e1b14b0548,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713289350771888572,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.176:8443,kubernetes.io/config.hash: e6ef930f623848dab209c9e1b14b0548,kubernetes.io/config.seen: 2024-04-16T17:41:19.385338141Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4dc0eedaba0edbe8beba9af3390fa57b3c3e68a47b01be442f8d5b27180eaec3,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-ddmc8,Uid:c012b947-0bb8-47c8-aff6-fb19c9af0145,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713289294140212267,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-04-16T17:41:33.826946631Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:423f16965a399a785555531ac63db58f7b01e58ade00d70b527227fe314f4f0f,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-sgc9g,Uid:20131cd9-1e3b-424c-af02-2f747b973b8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713289294083801281,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-sgc9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20131cd9-1e3b-424c-af02-2f747b973b8c,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T17:41:33.765314620Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9a75b9c6519646a99cb7856bd465b4ebb3830f4f466cc2f2dbf6d02fc329f5c,Metadata:&PodSandboxMetadata{Name:kube-proxy-9k8tn,Uid:474c9f71-8089-4f36-b37c-9fb0639804c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713289294005744783,
Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T17:41:33.691684413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f82f2146-4db4-4838-8ac9-4b14a7ba57a0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.878493639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3557d21d-ec32-46e6-9690-d026c8ea9edf name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.878542457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3557d21d-ec32-46e6-9690-d026c8ea9edf name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.878772970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289366123803663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289366135777600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289366161721741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289366158815362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817,PodSandboxId:9a2044d33e9477f5468ef7619ecfaa96da2f5cc06ceb099ea113b7a257c21c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289352164297558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d,PodSandboxId:a85bea2f87b5acb76e2007f68f4758dc85eedbcd7fc0c0da9a0ec33f6fb8d26c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713289351501957598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io
.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713289351314241599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289351322872699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289351240517201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713289351079821561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d,PodSandboxId:4dc0eedaba0edbe8beba9af3390fa57b3c3e68a47b01be442f8d5b27180eaec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713289294797282553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447,PodSandboxId:c9a75b9c6519646a99cb7856bd465b4ebb3830f4f466cc2f2dbf6d02fc329f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713289294357318585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3557d21d-ec32-46e6-9690-d026c8ea9edf name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.878827633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=898e6f31-cd46-4671-8039-aa6af745a813 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.879540493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=898e6f31-cd46-4671-8039-aa6af745a813 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.879745040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289366123803663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289366135777600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289366161721741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289366158815362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817,PodSandboxId:9a2044d33e9477f5468ef7619ecfaa96da2f5cc06ceb099ea113b7a257c21c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289352164297558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d,PodSandboxId:a85bea2f87b5acb76e2007f68f4758dc85eedbcd7fc0c0da9a0ec33f6fb8d26c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713289351501957598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io
.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713289351314241599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289351322872699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289351240517201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713289351079821561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d,PodSandboxId:4dc0eedaba0edbe8beba9af3390fa57b3c3e68a47b01be442f8d5b27180eaec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713289294797282553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447,PodSandboxId:c9a75b9c6519646a99cb7856bd465b4ebb3830f4f466cc2f2dbf6d02fc329f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713289294357318585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=898e6f31-cd46-4671-8039-aa6af745a813 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.927616479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a251bf66-1a93-4a30-a1b9-b5eec2e43320 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.927699055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a251bf66-1a93-4a30-a1b9-b5eec2e43320 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.929154594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34ebf69e-10b0-4f58-84be-a89e2714c367 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.929818295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289387929791268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34ebf69e-10b0-4f58-84be-a89e2714c367 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.930684028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=155ce685-c339-44bf-8616-0ce6ea2dfe0b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.930814287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=155ce685-c339-44bf-8616-0ce6ea2dfe0b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:43:07 pause-970622 crio[2469]: time="2024-04-16 17:43:07.931090646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289366123803663,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289366135777600,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289366161721741,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289366158815362,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817,PodSandboxId:9a2044d33e9477f5468ef7619ecfaa96da2f5cc06ceb099ea113b7a257c21c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289352164297558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d,PodSandboxId:a85bea2f87b5acb76e2007f68f4758dc85eedbcd7fc0c0da9a0ec33f6fb8d26c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713289351501957598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io
.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1,PodSandboxId:ec0d113699e3f6e947c77f0e812a492ddf80a3e869c41c84e378bcfd7f5e13ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713289351314241599,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360abb604b7c06c559ec13110b94d6e3,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b,PodSandboxId:9f867bb17f7f29f39f2e69340d33c478e17b5cf08aef9c14546fdc7257c9983a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713289351322872699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f54968e864aab0556b9ac05d7eb288db,},Annotations:map[string]string{io.kubernetes.container.hash: d1d25233,io.k
ubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3,PodSandboxId:7c1a756460c1ac4b07ce5a3d8acdf6c2b384ac06260c44734b93bdaa19eeb1a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289351240517201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ef930f623848dab209c9e1b14b0548,},Annotations:map[string]string{io.kubernetes.container.hash: b922b014,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc,PodSandboxId:8776bf30dc24a653b67819ffdfec8a6b9f0c7e0c85e930855c3ef8dc18e7be19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713289351079821561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-970622,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91e3f824c3bfcc8c5f3c22df6d2732a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d,PodSandboxId:4dc0eedaba0edbe8beba9af3390fa57b3c3e68a47b01be442f8d5b27180eaec3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713289294797282553,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ddmc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c012b947-0bb8-47c8-aff6-fb19c9af0145,},Annotations:map[string]string{io.kubernetes.container.hash: df2a3173,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447,PodSandboxId:c9a75b9c6519646a99cb7856bd465b4ebb3830f4f466cc2f2dbf6d02fc329f5c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713289294357318585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9k8tn,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 474c9f71-8089-4f36-b37c-9fb0639804c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7109d839,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=155ce685-c339-44bf-8616-0ce6ea2dfe0b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2e62264fb71e5       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   21 seconds ago       Running             kube-controller-manager   2                   ec0d113699e3f       kube-controller-manager-pause-970622
	db002313ebce2       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   21 seconds ago       Running             kube-apiserver            2                   7c1a756460c1a       kube-apiserver-pause-970622
	3bc99a9165aa6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   21 seconds ago       Running             kube-scheduler            2                   8776bf30dc24a       kube-scheduler-pause-970622
	04e0dddad44cd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago       Running             etcd                      2                   9f867bb17f7f2       etcd-pause-970622
	f88157dbb4291       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   35 seconds ago       Running             coredns                   1                   9a2044d33e947       coredns-76f75df574-ddmc8
	3d9b87bb8ec76       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   36 seconds ago       Running             kube-proxy                1                   a85bea2f87b5a       kube-proxy-9k8tn
	bfcc4acb5fcda       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   36 seconds ago       Exited              etcd                      1                   9f867bb17f7f2       etcd-pause-970622
	7f9eb39bd3e31       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   36 seconds ago       Exited              kube-controller-manager   1                   ec0d113699e3f       kube-controller-manager-pause-970622
	f498c505f3d02       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   36 seconds ago       Exited              kube-apiserver            1                   7c1a756460c1a       kube-apiserver-pause-970622
	bf820d6141334       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   36 seconds ago       Exited              kube-scheduler            1                   8776bf30dc24a       kube-scheduler-pause-970622
	7f1e6f554ece7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   4dc0eedaba0ed       coredns-76f75df574-ddmc8
	9d688fdfab050       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   About a minute ago   Exited              kube-proxy                0                   c9a75b9c65196       kube-proxy-9k8tn
	
	
	==> coredns [7f1e6f554ece7f254dbabe3e9d9fb001d40827743e0a580ad0babea6db84bd2d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49933 - 52742 "HINFO IN 6829272663999387613.5345536729571950507. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008738678s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1713941778]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:41:35.071) (total time: 30002ms):
	Trace[1713941778]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (17:42:05.073)
	Trace[1713941778]: [30.002709471s] [30.002709471s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[152937220]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:41:35.073) (total time: 30000ms):
	Trace[152937220]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:42:05.074)
	Trace[152937220]: [30.000887242s] [30.000887242s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1315438823]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:41:35.073) (total time: 30001ms):
	Trace[1315438823]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (17:42:05.074)
	Trace[1315438823]: [30.001261654s] [30.001261654s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f88157dbb4291ac8f6c833e54caf757e77f1d4291680ec4e3857e8dd63348817] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49646->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49662->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49664->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[988847676]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:42:32.707) (total time: 10998ms):
	Trace[988847676]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49662->10.96.0.1:443: read: connection reset by peer 10998ms (17:42:43.706)
	Trace[988847676]: [10.998313991s] [10.998313991s] END
	[INFO] plugin/kubernetes: Trace[10955247]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:42:32.708) (total time: 10998ms):
	Trace[10955247]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49664->10.96.0.1:443: read: connection reset by peer 10997ms (17:42:43.705)
	Trace[10955247]: [10.998070558s] [10.998070558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49664->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49662->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[730141636]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 17:42:32.702) (total time: 11003ms):
	Trace[730141636]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49646->10.96.0.1:443: read: connection reset by peer 11002ms (17:42:43.705)
	Trace[730141636]: [11.003112927s] [11.003112927s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:49646->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-970622
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-970622
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=pause-970622
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_41_19_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-970622
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:42:49 +0000   Tue, 16 Apr 2024 17:41:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:42:49 +0000   Tue, 16 Apr 2024 17:41:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:42:49 +0000   Tue, 16 Apr 2024 17:41:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:42:49 +0000   Tue, 16 Apr 2024 17:41:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    pause-970622
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f1ddad7b6754810980440d1c321784c
	  System UUID:                5f1ddad7-b675-4810-9804-40d1c321784c
	  Boot ID:                    18f46747-325f-4365-8cff-9ab12676fe46
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-ddmc8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     95s
	  kube-system                 etcd-pause-970622                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         110s
	  kube-system                 kube-apiserver-pause-970622             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-pause-970622    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-9k8tn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-pause-970622             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     115s (x7 over 116s)  kubelet          Node pause-970622 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  115s (x8 over 116s)  kubelet          Node pause-970622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 116s)  kubelet          Node pause-970622 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node pause-970622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node pause-970622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node pause-970622 status is now: NodeHasSufficientPID
	  Normal  NodeReady                109s                 kubelet          Node pause-970622 status is now: NodeReady
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           96s                  node-controller  Node pause-970622 event: Registered Node pause-970622 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-970622 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-970622 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node pause-970622 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-970622 event: Registered Node pause-970622 in Controller
	
	
	==> dmesg <==
	[Apr16 17:41] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.127325] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.215747] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.121947] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.307353] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.996822] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.068336] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.646234] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.554354] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.301350] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.081339] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.290343] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.006180] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[ +11.758958] kauditd_printk_skb: 88 callbacks suppressed
	[Apr16 17:42] systemd-fstab-generator[2390]: Ignoring "noauto" option for root device
	[  +0.128655] systemd-fstab-generator[2402]: Ignoring "noauto" option for root device
	[  +0.179989] systemd-fstab-generator[2416]: Ignoring "noauto" option for root device
	[  +0.134995] systemd-fstab-generator[2428]: Ignoring "noauto" option for root device
	[  +0.287669] systemd-fstab-generator[2456]: Ignoring "noauto" option for root device
	[  +6.598033] systemd-fstab-generator[2582]: Ignoring "noauto" option for root device
	[  +0.078039] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.560954] kauditd_printk_skb: 87 callbacks suppressed
	[  +2.371930] systemd-fstab-generator[3315]: Ignoring "noauto" option for root device
	[  +4.126359] kauditd_printk_skb: 38 callbacks suppressed
	[Apr16 17:43] systemd-fstab-generator[3677]: Ignoring "noauto" option for root device
	
	
	==> etcd [04e0dddad44cd6dceba3e5ad3549af212f077952d6c7c43dd5bfd8179ab8b507] <==
	{"level":"info","ts":"2024-04-16T17:42:46.635703Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:46.635716Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:46.635312Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:42:46.635916Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f70d523d4475ce3b","initial-advertise-peer-urls":["https://192.168.39.176:2380"],"listen-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:42:46.635961Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T17:42:46.635331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-16T17:42:46.636053Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-16T17:42:46.646701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=(17801975325160492603)"}
	{"level":"info","ts":"2024-04-16T17:42:46.646789Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","added-peer-id":"f70d523d4475ce3b","added-peer-peer-urls":["https://192.168.39.176:2380"]}
	{"level":"info","ts":"2024-04-16T17:42:46.647037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:42:46.647092Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:42:47.800857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T17:42:47.800891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T17:42:47.800918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgPreVoteResp from f70d523d4475ce3b at term 2"}
	{"level":"info","ts":"2024-04-16T17:42:47.800929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T17:42:47.800935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgVoteResp from f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-04-16T17:42:47.800943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became leader at term 3"}
	{"level":"info","ts":"2024-04-16T17:42:47.800958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f70d523d4475ce3b elected leader f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-04-16T17:42:47.806547Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f70d523d4475ce3b","local-member-attributes":"{Name:pause-970622 ClientURLs:[https://192.168.39.176:2379]}","request-path":"/0/members/f70d523d4475ce3b/attributes","cluster-id":"40fea5b1ef9207e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:42:47.80672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:42:47.809149Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.176:2379"}
	{"level":"info","ts":"2024-04-16T17:42:47.809799Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T17:42:47.811568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T17:42:47.811668Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T17:42:47.811719Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b] <==
	{"level":"info","ts":"2024-04-16T17:42:31.848449Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"59.605517ms"}
	{"level":"info","ts":"2024-04-16T17:42:31.898893Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-16T17:42:31.929679Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","commit-index":457}
	{"level":"info","ts":"2024-04-16T17:42:31.929859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-16T17:42:31.930076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became follower at term 2"}
	{"level":"info","ts":"2024-04-16T17:42:31.930113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f70d523d4475ce3b [peers: [], term: 2, commit: 457, applied: 0, lastindex: 457, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-16T17:42:31.933084Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-16T17:42:31.96578Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":435}
	{"level":"info","ts":"2024-04-16T17:42:31.974711Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-16T17:42:31.993611Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f70d523d4475ce3b","timeout":"7s"}
	{"level":"info","ts":"2024-04-16T17:42:31.993956Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f70d523d4475ce3b"}
	{"level":"info","ts":"2024-04-16T17:42:31.99403Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"f70d523d4475ce3b","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-16T17:42:31.995054Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-16T17:42:31.995826Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:31.995893Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:31.995922Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T17:42:31.99677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=(17801975325160492603)"}
	{"level":"info","ts":"2024-04-16T17:42:31.997176Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","added-peer-id":"f70d523d4475ce3b","added-peer-peer-urls":["https://192.168.39.176:2380"]}
	{"level":"info","ts":"2024-04-16T17:42:31.997934Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:42:32.000466Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T17:42:32.019997Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T17:42:32.023507Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-16T17:42:32.024571Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-16T17:42:32.035889Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f70d523d4475ce3b","initial-advertise-peer-urls":["https://192.168.39.176:2380"],"listen-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T17:42:32.036161Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 17:43:08 up 2 min,  0 users,  load average: 0.95, 0.47, 0.18
	Linux pause-970622 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [db002313ebce2c686af01f94d9f76b48926c252d92d5e3d2b62c12566b27cff4] <==
	I0416 17:42:49.192033       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 17:42:49.193161       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 17:42:49.193623       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 17:42:49.193660       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 17:42:49.193667       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 17:42:49.205054       1 aggregator.go:165] initial CRD sync complete...
	I0416 17:42:49.205093       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 17:42:49.205099       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 17:42:49.211698       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 17:42:49.216453       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 17:42:49.266505       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 17:42:49.277213       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 17:42:49.305277       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 17:42:49.305447       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 17:42:49.305798       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 17:42:49.305967       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 17:42:49.306212       1 cache.go:39] Caches are synced for autoregister controller
	I0416 17:42:50.108260       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 17:42:50.959519       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 17:42:50.972008       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 17:42:51.009099       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 17:42:51.035603       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 17:42:51.046155       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 17:43:01.727611       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 17:43:01.729672       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3] <==
	I0416 17:42:32.196338       1 options.go:222] external host was not specified, using 192.168.39.176
	I0416 17:42:32.201772       1 server.go:148] Version: v1.29.3
	I0416 17:42:32.201815       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0416 17:42:32.764022       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:32.764919       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0416 17:42:32.764988       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0416 17:42:32.772925       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0416 17:42:32.772983       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0416 17:42:32.773220       1 instance.go:297] Using reconciler: lease
	W0416 17:42:32.774304       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:33.765494       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:33.765580       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:33.774988       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:35.082680       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:35.085085       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:35.125644       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:37.193116       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:37.211022       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:37.496723       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:40.857129       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:41.670717       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:42:41.916208       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2e62264fb71e5450d384276e1bccd20d695157c0a6b42fb5f43df7eaea1abb2f] <==
	I0416 17:43:01.768766       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-970622"
	I0416 17:43:01.768833       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0416 17:43:01.768878       1 shared_informer.go:318] Caches are synced for node
	I0416 17:43:01.768908       1 range_allocator.go:174] "Sending events to api server"
	I0416 17:43:01.768951       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0416 17:43:01.768957       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0416 17:43:01.768961       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0416 17:43:01.768992       1 shared_informer.go:318] Caches are synced for job
	I0416 17:43:01.769589       1 event.go:376] "Event occurred" object="pause-970622" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-970622 event: Registered Node pause-970622 in Controller"
	I0416 17:43:01.779112       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0416 17:43:01.779889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="619.714µs"
	I0416 17:43:01.780010       1 shared_informer.go:318] Caches are synced for deployment
	I0416 17:43:01.788961       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0416 17:43:01.803123       1 shared_informer.go:318] Caches are synced for expand
	I0416 17:43:01.816122       1 shared_informer.go:318] Caches are synced for stateful set
	I0416 17:43:01.833071       1 shared_informer.go:318] Caches are synced for attach detach
	I0416 17:43:01.840480       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 17:43:01.855015       1 shared_informer.go:318] Caches are synced for ephemeral
	I0416 17:43:01.860130       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 17:43:01.873871       1 shared_informer.go:318] Caches are synced for PVC protection
	I0416 17:43:01.883754       1 shared_informer.go:318] Caches are synced for persistent volume
	I0416 17:43:01.884577       1 shared_informer.go:318] Caches are synced for HPA
	I0416 17:43:02.301933       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 17:43:02.302105       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0416 17:43:02.308507       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1] <==
	
	
	==> kube-proxy [3d9b87bb8ec76511db280558326fd14ea1b15f1fb5d1ce9dc4bd55e8bd01810d] <==
	I0416 17:42:32.816398       1 server_others.go:72] "Using iptables proxy"
	E0416 17:42:43.706891       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-970622\": dial tcp 192.168.39.176:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.176:59746->192.168.39.176:8443: read: connection reset by peer"
	E0416 17:42:44.846912       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-970622\": dial tcp 192.168.39.176:8443: connect: connection refused"
	I0416 17:42:49.237002       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	I0416 17:42:49.322268       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:42:49.322552       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:42:49.322756       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:42:49.328409       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:42:49.330672       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:42:49.330772       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:42:49.333272       1 config.go:188] "Starting service config controller"
	I0416 17:42:49.334087       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:42:49.334140       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:42:49.334869       1 config.go:315] "Starting node config controller"
	I0416 17:42:49.335869       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:42:49.339294       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:42:49.339502       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:42:49.435133       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:42:49.436726       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [9d688fdfab0507ce312ccc0780112745141a8f6884641b28854e7e7e8a9a7447] <==
	I0416 17:41:34.814412       1 server_others.go:72] "Using iptables proxy"
	I0416 17:41:34.853248       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	I0416 17:41:35.072635       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:41:35.074143       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:41:35.074267       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:41:35.078689       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:41:35.079228       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:41:35.079276       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:41:35.081654       1 config.go:188] "Starting service config controller"
	I0416 17:41:35.081938       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:41:35.081993       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:41:35.082018       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:41:35.083799       1 config.go:315] "Starting node config controller"
	I0416 17:41:35.083840       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:41:35.182254       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:41:35.182471       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:41:35.183884       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3bc99a9165aa68cb2ef4f2e1da976be0408364c29980e2202efa6bc64bc5d4e0] <==
	I0416 17:42:47.377248       1 serving.go:380] Generated self-signed cert in-memory
	W0416 17:42:49.209686       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 17:42:49.209795       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:42:49.209912       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 17:42:49.209940       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 17:42:49.256308       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 17:42:49.256427       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:42:49.261181       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 17:42:49.261294       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:42:49.261307       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:42:49.261323       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 17:42:49.363815       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc] <==
	I0416 17:42:32.609739       1 serving.go:380] Generated self-signed cert in-memory
	W0416 17:42:43.706851       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.39.176:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.176:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.176:59772->192.168.39.176:8443: read: connection reset by peer
	W0416 17:42:43.706889       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 17:42:43.706899       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 17:42:43.719133       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 17:42:43.719202       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:42:43.723409       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:42:43.723526       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0416 17:42:43.723605       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:42:43.723657       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:42:43.725753       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0416 17:42:43.725892       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0416 17:42:43.727192       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 17:42:43.728110       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 17:42:43.728218       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0416 17:42:43.729745       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.832337    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/360abb604b7c06c559ec13110b94d6e3-ca-certs\") pod \"kube-controller-manager-pause-970622\" (UID: \"360abb604b7c06c559ec13110b94d6e3\") " pod="kube-system/kube-controller-manager-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.832527    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/360abb604b7c06c559ec13110b94d6e3-kubeconfig\") pod \"kube-controller-manager-pause-970622\" (UID: \"360abb604b7c06c559ec13110b94d6e3\") " pod="kube-system/kube-controller-manager-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.832714    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/360abb604b7c06c559ec13110b94d6e3-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-970622\" (UID: \"360abb604b7c06c559ec13110b94d6e3\") " pod="kube-system/kube-controller-manager-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.832892    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91e3f824c3bfcc8c5f3c22df6d2732a4-kubeconfig\") pod \"kube-scheduler-pause-970622\" (UID: \"91e3f824c3bfcc8c5f3c22df6d2732a4\") " pod="kube-system/kube-scheduler-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.833050    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f54968e864aab0556b9ac05d7eb288db-etcd-data\") pod \"etcd-pause-970622\" (UID: \"f54968e864aab0556b9ac05d7eb288db\") " pod="kube-system/etcd-pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: I0416 17:42:45.936795    3322 kubelet_node_status.go:73] "Attempting to register node" node="pause-970622"
	Apr 16 17:42:45 pause-970622 kubelet[3322]: E0416 17:42:45.937575    3322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.176:8443: connect: connection refused" node="pause-970622"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.097798    3322 scope.go:117] "RemoveContainer" containerID="bfcc4acb5fcdaace8031359d1315755286a7075a299fba64a8541bd0b1c3dd5b"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.100264    3322 scope.go:117] "RemoveContainer" containerID="f498c505f3d028c30e644bd5d4bb40fed0dfbb7597ece811afe4df000ae081c3"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.101628    3322 scope.go:117] "RemoveContainer" containerID="7f9eb39bd3e310684919a396ebeb3b62af108c22f281435109aa23b468c1eff1"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.103440    3322 scope.go:117] "RemoveContainer" containerID="bf820d6141334bbe4300165ff362be531d8944b5466587f30a2dff2fd1aa9ebc"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: E0416 17:42:46.229183    3322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-970622?timeout=10s\": dial tcp 192.168.39.176:8443: connect: connection refused" interval="800ms"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: I0416 17:42:46.339895    3322 kubelet_node_status.go:73] "Attempting to register node" node="pause-970622"
	Apr 16 17:42:46 pause-970622 kubelet[3322]: E0416 17:42:46.341228    3322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.176:8443: connect: connection refused" node="pause-970622"
	Apr 16 17:42:47 pause-970622 kubelet[3322]: I0416 17:42:47.143032    3322 kubelet_node_status.go:73] "Attempting to register node" node="pause-970622"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.345201    3322 kubelet_node_status.go:112] "Node was previously registered" node="pause-970622"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.345277    3322 kubelet_node_status.go:76] "Successfully registered node" node="pause-970622"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.346846    3322 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.348002    3322 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.597034    3322 apiserver.go:52] "Watching apiserver"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.603447    3322 topology_manager.go:215] "Topology Admit Handler" podUID="474c9f71-8089-4f36-b37c-9fb0639804c3" podNamespace="kube-system" podName="kube-proxy-9k8tn"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.603814    3322 topology_manager.go:215] "Topology Admit Handler" podUID="c012b947-0bb8-47c8-aff6-fb19c9af0145" podNamespace="kube-system" podName="coredns-76f75df574-ddmc8"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.623052    3322 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.665472    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/474c9f71-8089-4f36-b37c-9fb0639804c3-xtables-lock\") pod \"kube-proxy-9k8tn\" (UID: \"474c9f71-8089-4f36-b37c-9fb0639804c3\") " pod="kube-system/kube-proxy-9k8tn"
	Apr 16 17:42:49 pause-970622 kubelet[3322]: I0416 17:42:49.665551    3322 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/474c9f71-8089-4f36-b37c-9fb0639804c3-lib-modules\") pod \"kube-proxy-9k8tn\" (UID: \"474c9f71-8089-4f36-b37c-9fb0639804c3\") " pod="kube-system/kube-proxy-9k8tn"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:43:07.460045   58140 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18649-3628/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
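The "bufio.Scanner: token too long" error above is Go's bufio.Scanner hitting its default maximum token size (bufio.MaxScanTokenSize, 64 KiB) on an oversized line in lastStart.txt. A minimal sketch of reading such a file with an enlarged scanner buffer; the file name and buffer sizes are illustrative, this is not minikube's actual logs.go code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative file name; the report above references .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is bufio.MaxScanTokenSize (64 KiB); a single
		// longer line produces exactly the "token too long" error seen above.
		// Raising the limit (here to 10 MiB) avoids it.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process one log line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}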
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-970622 -n pause-970622
helpers_test.go:261: (dbg) Run:  kubectl --context pause-970622 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-304316 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-304316 --alsologtostderr -v=3: exit status 82 (2m0.550962054s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-304316"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 17:44:25.212453   58802 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:44:25.212726   58802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:44:25.212740   58802 out.go:304] Setting ErrFile to fd 2...
	I0416 17:44:25.212746   58802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:44:25.213049   58802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:44:25.213344   58802 out.go:298] Setting JSON to false
	I0416 17:44:25.213449   58802 mustload.go:65] Loading cluster: default-k8s-diff-port-304316
	I0416 17:44:25.213764   58802 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:44:25.213826   58802 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/config.json ...
	I0416 17:44:25.213987   58802 mustload.go:65] Loading cluster: default-k8s-diff-port-304316
	I0416 17:44:25.214082   58802 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:44:25.214110   58802 stop.go:39] StopHost: default-k8s-diff-port-304316
	I0416 17:44:25.214455   58802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:44:25.214495   58802 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:44:25.229560   58802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38861
	I0416 17:44:25.229973   58802 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:44:25.230654   58802 main.go:141] libmachine: Using API Version  1
	I0416 17:44:25.230688   58802 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:44:25.231020   58802 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:44:25.233786   58802 out.go:177] * Stopping node "default-k8s-diff-port-304316"  ...
	I0416 17:44:25.235646   58802 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 17:44:25.235670   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:44:25.235881   58802 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 17:44:25.235903   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:44:25.238854   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:44:25.239365   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:43:26 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:44:25.239403   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:44:25.239565   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:44:25.239746   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:44:25.239904   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:44:25.240047   58802 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa Username:docker}
	I0416 17:44:25.347618   58802 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 17:44:25.423175   58802 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 17:44:25.497893   58802 main.go:141] libmachine: Stopping "default-k8s-diff-port-304316"...
	I0416 17:44:25.497924   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:44:25.499496   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Stop
	I0416 17:44:25.503377   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 0/120
	I0416 17:44:26.504897   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 1/120
	I0416 17:44:27.506163   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 2/120
	I0416 17:44:28.507664   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 3/120
	I0416 17:44:29.508879   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 4/120
	I0416 17:44:30.510895   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 5/120
	I0416 17:44:31.512943   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 6/120
	I0416 17:44:32.514102   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 7/120
	I0416 17:44:33.516018   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 8/120
	I0416 17:44:34.517294   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 9/120
	I0416 17:44:35.519362   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 10/120
	I0416 17:44:36.520639   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 11/120
	I0416 17:44:37.521915   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 12/120
	I0416 17:44:38.523651   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 13/120
	I0416 17:44:39.525018   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 14/120
	I0416 17:44:40.526706   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 15/120
	I0416 17:44:41.528176   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 16/120
	I0416 17:44:42.530553   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 17/120
	I0416 17:44:43.532172   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 18/120
	I0416 17:44:44.533723   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 19/120
	I0416 17:44:45.535963   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 20/120
	I0416 17:44:46.537501   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 21/120
	I0416 17:44:47.538969   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 22/120
	I0416 17:44:48.541233   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 23/120
	I0416 17:44:49.542588   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 24/120
	I0416 17:44:50.544932   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 25/120
	I0416 17:44:51.546280   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 26/120
	I0416 17:44:52.547707   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 27/120
	I0416 17:44:53.549191   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 28/120
	I0416 17:44:54.550569   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 29/120
	I0416 17:44:55.552937   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 30/120
	I0416 17:44:56.554431   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 31/120
	I0416 17:44:57.555923   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 32/120
	I0416 17:44:58.557430   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 33/120
	I0416 17:44:59.558729   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 34/120
	I0416 17:45:00.561408   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 35/120
	I0416 17:45:01.562725   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 36/120
	I0416 17:45:02.564140   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 37/120
	I0416 17:45:03.565871   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 38/120
	I0416 17:45:04.567395   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 39/120
	I0416 17:45:05.568755   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 40/120
	I0416 17:45:06.570474   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 41/120
	I0416 17:45:07.572046   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 42/120
	I0416 17:45:08.574012   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 43/120
	I0416 17:45:09.575367   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 44/120
	I0416 17:45:10.577185   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 45/120
	I0416 17:45:11.579311   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 46/120
	I0416 17:45:12.581381   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 47/120
	I0416 17:45:13.583627   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 48/120
	I0416 17:45:14.585195   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 49/120
	I0416 17:45:15.587397   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 50/120
	I0416 17:45:16.588952   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 51/120
	I0416 17:45:17.590252   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 52/120
	I0416 17:45:18.591919   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 53/120
	I0416 17:45:19.593284   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 54/120
	I0416 17:45:20.595264   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 55/120
	I0416 17:45:21.596701   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 56/120
	I0416 17:45:22.598100   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 57/120
	I0416 17:45:23.599449   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 58/120
	I0416 17:45:24.600903   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 59/120
	I0416 17:45:25.603066   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 60/120
	I0416 17:45:26.604428   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 61/120
	I0416 17:45:27.605839   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 62/120
	I0416 17:45:28.607561   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 63/120
	I0416 17:45:29.609228   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 64/120
	I0416 17:45:30.611043   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 65/120
	I0416 17:45:31.612357   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 66/120
	I0416 17:45:32.613938   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 67/120
	I0416 17:45:33.615527   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 68/120
	I0416 17:45:34.616967   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 69/120
	I0416 17:45:35.619120   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 70/120
	I0416 17:45:36.620491   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 71/120
	I0416 17:45:37.621833   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 72/120
	I0416 17:45:38.623483   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 73/120
	I0416 17:45:39.624771   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 74/120
	I0416 17:45:40.626807   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 75/120
	I0416 17:45:41.628112   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 76/120
	I0416 17:45:42.630555   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 77/120
	I0416 17:45:43.631892   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 78/120
	I0416 17:45:44.633316   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 79/120
	I0416 17:45:45.635201   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 80/120
	I0416 17:45:46.636544   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 81/120
	I0416 17:45:47.637966   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 82/120
	I0416 17:45:48.639284   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 83/120
	I0416 17:45:49.640785   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 84/120
	I0416 17:45:50.642926   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 85/120
	I0416 17:45:51.644153   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 86/120
	I0416 17:45:52.645730   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 87/120
	I0416 17:45:53.646958   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 88/120
	I0416 17:45:54.648465   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 89/120
	I0416 17:45:55.650671   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 90/120
	I0416 17:45:56.652185   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 91/120
	I0416 17:45:57.654110   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 92/120
	I0416 17:45:58.655503   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 93/120
	I0416 17:45:59.656916   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 94/120
	I0416 17:46:00.658962   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 95/120
	I0416 17:46:01.661078   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 96/120
	I0416 17:46:02.663366   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 97/120
	I0416 17:46:03.664763   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 98/120
	I0416 17:46:04.666059   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 99/120
	I0416 17:46:05.667938   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 100/120
	I0416 17:46:06.669820   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 101/120
	I0416 17:46:07.671482   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 102/120
	I0416 17:46:08.672877   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 103/120
	I0416 17:46:09.674406   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 104/120
	I0416 17:46:10.675742   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 105/120
	I0416 17:46:11.677078   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 106/120
	I0416 17:46:12.679221   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 107/120
	I0416 17:46:13.680518   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 108/120
	I0416 17:46:14.682633   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 109/120
	I0416 17:46:15.684693   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 110/120
	I0416 17:46:16.686225   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 111/120
	I0416 17:46:17.687666   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 112/120
	I0416 17:46:18.689097   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 113/120
	I0416 17:46:19.691329   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 114/120
	I0416 17:46:20.693200   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 115/120
	I0416 17:46:21.695375   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 116/120
	I0416 17:46:22.696612   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 117/120
	I0416 17:46:23.698562   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 118/120
	I0416 17:46:24.699705   58802 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for machine to stop 119/120
	I0416 17:46:25.700865   58802 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 17:46:25.700914   58802 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 17:46:25.702790   58802 out.go:177] 
	W0416 17:46:25.704217   58802 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 17:46:25.704234   58802 out.go:239] * 
	* 
	W0416 17:46:25.707685   58802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 17:46:25.708928   58802 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-304316 --alsologtostderr -v=3" : exit status 82
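Exit status 82 here is minikube's GUEST_STOP_TIMEOUT path shown in the stderr above: the driver polls the VM state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and then gives up while the VM still reports "Running". A minimal sketch of that poll-until-timeout pattern; function and parameter names are illustrative, not minikube's actual libmachine API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls a state function once per second, up to maxAttempts
	// times, and fails if the machine never reports "Stopped"; this mirrors
	// the 0/120..119/120 loop in the log above.
	func waitForStop(getState func() (string, error), maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulated machine that stops after three polls; a VM stuck in
		// "Running" would instead exhaust all attempts and return the error.
		polls := 0
		err := waitForStop(func() (string, error) {
			polls++
			if polls > 3 {
				return "Stopped", nil
			}
			return "Running", nil
		}, 120)
		fmt.Println("stop err:", err)
	}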
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316: exit status 3 (18.610786798s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:46:44.321150   59239 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0416 17:46:44.321172   59239 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304316" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316: exit status 3 (3.20407836s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:46:47.525131   59335 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0416 17:46:47.525152   59335 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-304316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-304316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.14891155s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-304316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316: exit status 3 (3.062662402s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 17:46:56.737274   59415 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0416 17:46:56.737301   59415 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-304316" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (331.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: (identical "connection refused" warning for https://192.168.50.168:8443 repeated; duplicate log lines omitted)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
E0416 17:52:03.889672   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
(the WARNING above appeared 6 times here)
E0416 17:52:10.030647   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.168:8443: connect: connection refused
(the WARNING above appeared 46 times here; the apiserver at 192.168.50.168:8443 kept refusing connections until the 9m0s deadline expired)
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 2 (234.901667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-795352" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-795352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-795352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.91µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-795352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
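The failure above is the helper giving up after polling the stopped apiserver for the dashboard pod. A minimal sketch of the same checks run by hand, assuming the old-k8s-version-795352 profile and its kubectl context from this run are still present locally:

    # Same apiserver probe the test makes (prints "Stopped" in this run)
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352

    # Same pod wait, expressed as a one-shot list with the label selector from the warning above
    kubectl --context old-k8s-version-795352 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

    # Same image assertion: the scraper deployment is expected to reference registry.k8s.io/echoserver:1.4
    kubectl --context old-k8s-version-795352 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper

With the apiserver down, both kubectl calls fail with the same connection-refused error seen in the warning.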
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 2 (240.132881ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
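Note that the two post-mortem probes disagree: {{.Host}} reports Running while the earlier {{.APIServer}} probe reported Stopped, i.e. the VM is up but Kubernetes inside it is not. A quick way to view both fields in one call, assuming the Go template passed to --format can reference multiple fields of the status struct (the default multi-line output suggests it can):

    out/minikube-linux-amd64 status -p old-k8s-version-795352 --format='host:{{.Host}} apiserver:{{.APIServer}}'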
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-795352 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:30 UTC | 16 Apr 24 17:31 UTC |
	|         | --memory=2048                                          |                              |         |                |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p cert-expiration-235607                              | cert-expiration-235607       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	| delete  | -p                                                     | disable-driver-mounts-376814 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:31 UTC |
	|         | disable-driver-mounts-376814                           |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-368813                  | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-512869                 | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-368813                                   | no-preload-368813            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| start   | -p embed-certs-512869                                  | embed-certs-512869           | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:31 UTC | 16 Apr 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:35 UTC |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:35 UTC | 16 Apr 24 17:37 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:37 UTC | 16 Apr 24 17:38 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-633875                           | kubernetes-upgrade-633875    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:38 UTC | 16 Apr 24 17:38 UTC |
	| start   | -p stopped-upgrade-446675                              | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:38 UTC | 16 Apr 24 17:39 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	| stop    | stopped-upgrade-446675 stop                            | minikube                     | jenkins | v1.26.0        | 16 Apr 24 17:39 UTC | 16 Apr 24 17:39 UTC |
	| start   | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:39 UTC | 16 Apr 24 17:40 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p stopped-upgrade-446675                              | stopped-upgrade-446675       | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:40 UTC |
	| start   | -p pause-970622 --memory=2048                          | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:40 UTC | 16 Apr 24 17:42 UTC |
	|         | --install-addons=false                                 |                              |         |                |                     |                     |
	|         | --wait=all --driver=kvm2                               |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| start   | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:42 UTC | 16 Apr 24 17:43 UTC |
	|         | --alsologtostderr                                      |                              |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p pause-970622                                        | pause-970622                 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:43 UTC | 16 Apr 24 17:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:43 UTC | 16 Apr 24 17:44 UTC |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-304316  | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:44 UTC | 16 Apr 24 17:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:44 UTC |                     |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-304316       | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-304316 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:46 UTC |                     |
	|         | default-k8s-diff-port-304316                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:46:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:46:56.791301   59445 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:46:56.791849   59445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:46:56.791869   59445 out.go:304] Setting ErrFile to fd 2...
	I0416 17:46:56.791877   59445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:46:56.792352   59445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:46:56.793181   59445 out.go:298] Setting JSON to false
	I0416 17:46:56.794302   59445 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5369,"bootTime":1713284248,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:46:56.794364   59445 start.go:139] virtualization: kvm guest
	I0416 17:46:56.796934   59445 out.go:177] * [default-k8s-diff-port-304316] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:46:56.798418   59445 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:46:56.798451   59445 notify.go:220] Checking for updates...
	I0416 17:46:56.799763   59445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:46:56.801294   59445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:46:56.802621   59445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:46:56.803945   59445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:46:56.805309   59445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:46:56.807263   59445 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:46:56.807849   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.807910   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.822814   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40915
	I0416 17:46:56.823221   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.823677   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.823699   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.823980   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.824113   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.824309   59445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:46:56.824572   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.824603   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.839091   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0416 17:46:56.839441   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.839889   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.839915   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.840218   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.840429   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.875588   59445 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 17:46:56.876934   59445 start.go:297] selected driver: kvm2
	I0416 17:46:56.876949   59445 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:46:56.877057   59445 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:46:56.877720   59445 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:46:56.877855   59445 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:46:56.891935   59445 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:46:56.892284   59445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:46:56.892355   59445 cni.go:84] Creating CNI manager for ""
	I0416 17:46:56.892367   59445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:46:56.892408   59445 start.go:340] cluster config:
	{Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:46:56.892493   59445 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:46:56.894869   59445 out.go:177] * Starting "default-k8s-diff-port-304316" primary control-plane node in "default-k8s-diff-port-304316" cluster
	I0416 17:46:56.896238   59445 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:46:56.896274   59445 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:46:56.896292   59445 cache.go:56] Caching tarball of preloaded images
	I0416 17:46:56.896377   59445 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:46:56.896392   59445 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:46:56.896522   59445 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/config.json ...
	I0416 17:46:56.896735   59445 start.go:360] acquireMachinesLock for default-k8s-diff-port-304316: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:46:56.896788   59445 start.go:364] duration metric: took 28.964µs to acquireMachinesLock for "default-k8s-diff-port-304316"
	I0416 17:46:56.896810   59445 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:46:56.896824   59445 fix.go:54] fixHost starting: 
	I0416 17:46:56.897218   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:46:56.897257   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:46:56.910980   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0416 17:46:56.911374   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:46:56.911838   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:46:56.911861   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:46:56.912201   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:46:56.912387   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.912575   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:46:56.914179   59445 fix.go:112] recreateIfNeeded on default-k8s-diff-port-304316: state=Running err=<nil>
	W0416 17:46:56.914196   59445 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:46:56.916138   59445 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-304316" VM ...
	I0416 17:46:56.917401   59445 machine.go:94] provisionDockerMachine start ...
	I0416 17:46:56.917423   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:46:56.917604   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:46:56.919801   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:46:56.920180   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:43:26 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:46:56.920217   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:46:56.920347   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:46:56.920540   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:46:56.920688   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:46:56.920819   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:46:56.920959   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:46:56.921119   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:46:56.921129   59445 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:46:59.809186   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:02.881077   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:08.961238   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:12.033053   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:18.113089   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:21.185113   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:30.305165   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:33.377208   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:39.457128   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:42.529153   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:48.609097   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:51.685040   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:47:57.761077   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:00.833230   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:06.913045   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:09.985120   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:16.065075   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:19.141101   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:25.221118   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:28.289135   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:34.369068   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:37.445091   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:43.521090   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:46.593167   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:52.673093   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:48:55.745116   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:01.825195   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:04.897276   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:10.977087   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:14.049089   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:20.129139   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:23.201163   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:29.281110   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:32.353103   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:38.433052   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:41.505072   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:47.585081   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:50.657107   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:56.737202   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:49:59.809144   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:05.889152   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:08.965116   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:15.041030   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:18.117063   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:24.193083   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:27.265045   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:33.345075   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:36.417221   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:42.497055   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:45.573055   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:51.649098   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:50:54.725050   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:00.801050   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:03.877070   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:09.953093   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:13.025097   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:19.105086   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:22.181078   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:28.257048   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:31.329098   59445 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.6:22: connect: no route to host
	I0416 17:51:34.330002   59445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:51:34.330042   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetMachineName
	I0416 17:51:34.330396   59445 buildroot.go:166] provisioning hostname "default-k8s-diff-port-304316"
	I0416 17:51:34.330426   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetMachineName
	I0416 17:51:34.330626   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:51:34.332284   59445 machine.go:97] duration metric: took 4m37.414864045s to provisionDockerMachine
	I0416 17:51:34.332323   59445 fix.go:56] duration metric: took 4m37.435507413s for fixHost
	I0416 17:51:34.332328   59445 start.go:83] releasing machines lock for "default-k8s-diff-port-304316", held for 4m37.435525911s
	W0416 17:51:34.332343   59445 start.go:713] error starting host: provision: host is not running
	W0416 17:51:34.332440   59445 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0416 17:51:34.332449   59445 start.go:728] Will try again in 5 seconds ...
	I0416 17:51:39.334913   59445 start.go:360] acquireMachinesLock for default-k8s-diff-port-304316: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:51:39.335039   59445 start.go:364] duration metric: took 73.423µs to acquireMachinesLock for "default-k8s-diff-port-304316"
	I0416 17:51:39.335084   59445 start.go:96] Skipping create...Using existing machine configuration
	I0416 17:51:39.335095   59445 fix.go:54] fixHost starting: 
	I0416 17:51:39.335490   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:51:39.335518   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:51:39.350776   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I0416 17:51:39.351247   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:51:39.351746   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:51:39.351775   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:51:39.352147   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:51:39.352340   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:51:39.352517   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:51:39.354353   59445 fix.go:112] recreateIfNeeded on default-k8s-diff-port-304316: state=Stopped err=<nil>
	I0416 17:51:39.354377   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	W0416 17:51:39.354535   59445 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 17:51:39.357325   59445 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-304316" ...
	I0416 17:51:39.358626   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Start
	I0416 17:51:39.358786   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Ensuring networks are active...
	I0416 17:51:39.359442   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Ensuring network default is active
	I0416 17:51:39.359718   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Ensuring network mk-default-k8s-diff-port-304316 is active
	I0416 17:51:39.360090   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Getting domain xml...
	I0416 17:51:39.360703   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Creating domain...
	I0416 17:51:40.590472   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting to get IP...
	I0416 17:51:40.591393   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:40.591813   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:40.591885   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:40.591796   60828 retry.go:31] will retry after 233.207176ms: waiting for machine to come up
	I0416 17:51:40.826327   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:40.826787   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:40.826817   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:40.826754   60828 retry.go:31] will retry after 291.059126ms: waiting for machine to come up
	I0416 17:51:41.119227   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:41.119664   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:41.119686   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:41.119610   60828 retry.go:31] will retry after 445.747776ms: waiting for machine to come up
	I0416 17:51:41.567027   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:41.567571   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:41.567593   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:41.567530   60828 retry.go:31] will retry after 424.055171ms: waiting for machine to come up
	I0416 17:51:41.992713   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:41.993297   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:41.993327   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:41.993258   60828 retry.go:31] will retry after 524.260292ms: waiting for machine to come up
	I0416 17:51:42.518764   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:42.519309   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:42.519348   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:42.519249   60828 retry.go:31] will retry after 859.148151ms: waiting for machine to come up
	I0416 17:51:43.379499   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:43.380066   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:43.380096   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:43.380002   60828 retry.go:31] will retry after 919.110357ms: waiting for machine to come up
	I0416 17:51:44.300609   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:44.301185   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:44.301208   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:44.301142   60828 retry.go:31] will retry after 1.124773922s: waiting for machine to come up
	I0416 17:51:45.427940   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:45.428503   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:45.428536   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:45.428454   60828 retry.go:31] will retry after 1.392501549s: waiting for machine to come up
	I0416 17:51:46.822985   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:46.823524   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:46.823557   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:46.823485   60828 retry.go:31] will retry after 1.467109176s: waiting for machine to come up
	I0416 17:51:48.291866   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:48.292336   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:48.292381   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:48.292266   60828 retry.go:31] will retry after 2.18922176s: waiting for machine to come up
	I0416 17:51:50.483101   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:50.483599   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:50.483631   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:50.483560   60828 retry.go:31] will retry after 3.178848437s: waiting for machine to come up
	I0416 17:51:53.663556   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:53.664031   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:53.664061   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:53.663985   60828 retry.go:31] will retry after 3.107354862s: waiting for machine to come up
	I0416 17:51:56.772590   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:51:56.773065   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | unable to find current IP address of domain default-k8s-diff-port-304316 in network mk-default-k8s-diff-port-304316
	I0416 17:51:56.773093   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | I0416 17:51:56.773003   60828 retry.go:31] will retry after 4.4106867s: waiting for machine to come up
	I0416 17:52:01.184829   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.185325   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Found IP for machine: 192.168.39.6
	I0416 17:52:01.185357   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has current primary IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.185366   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Reserving static IP address...
	I0416 17:52:01.185776   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-304316", mac: "52:54:00:c6:a7:9f", ip: "192.168.39.6"} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.185834   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | skip adding static IP to network mk-default-k8s-diff-port-304316 - found existing host DHCP lease matching {name: "default-k8s-diff-port-304316", mac: "52:54:00:c6:a7:9f", ip: "192.168.39.6"}
	I0416 17:52:01.185852   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | Getting to WaitForSSH function...
	I0416 17:52:01.185864   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Reserved static IP address: 192.168.39.6
	I0416 17:52:01.185882   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Waiting for SSH to be available...
	I0416 17:52:01.188182   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.188517   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.188553   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.188592   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | Using SSH client type: external
	I0416 17:52:01.188617   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa (-rw-------)
	I0416 17:52:01.188665   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 17:52:01.188687   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | About to run SSH command:
	I0416 17:52:01.188711   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | exit 0
	I0416 17:52:01.317190   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | SSH cmd err, output: <nil>: 
	I0416 17:52:01.317531   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetConfigRaw
	I0416 17:52:01.318229   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetIP
	I0416 17:52:01.320829   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.321178   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.321209   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.321487   59445 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/config.json ...
	I0416 17:52:01.321756   59445 machine.go:94] provisionDockerMachine start ...
	I0416 17:52:01.321778   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:52:01.321975   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:01.324107   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.324421   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.324458   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.324547   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:01.324721   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.324951   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.325109   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:01.325330   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:52:01.325499   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:52:01.325511   59445 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 17:52:01.437514   59445 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 17:52:01.437543   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetMachineName
	I0416 17:52:01.437839   59445 buildroot.go:166] provisioning hostname "default-k8s-diff-port-304316"
	I0416 17:52:01.437868   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetMachineName
	I0416 17:52:01.438044   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:01.440792   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.441102   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.441140   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.441262   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:01.441431   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.441580   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.441711   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:01.441889   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:52:01.442094   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:52:01.442116   59445 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-304316 && echo "default-k8s-diff-port-304316" | sudo tee /etc/hostname
	I0416 17:52:01.571497   59445 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-304316
	
	I0416 17:52:01.571533   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:01.574350   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.574703   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.574743   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.574898   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:01.575077   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.575258   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.575402   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:01.575595   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:52:01.575795   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:52:01.575813   59445 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-304316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-304316/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-304316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:52:01.700750   59445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:52:01.700786   59445 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:52:01.700828   59445 buildroot.go:174] setting up certificates
	I0416 17:52:01.700861   59445 provision.go:84] configureAuth start
	I0416 17:52:01.700879   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetMachineName
	I0416 17:52:01.701175   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetIP
	I0416 17:52:01.704146   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.704657   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.704679   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.704889   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:01.707269   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.707601   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.707629   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.707777   59445 provision.go:143] copyHostCerts
	I0416 17:52:01.707852   59445 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:52:01.707873   59445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:52:01.707941   59445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:52:01.708033   59445 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:52:01.708042   59445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:52:01.708065   59445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:52:01.708117   59445 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:52:01.708124   59445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:52:01.708143   59445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:52:01.708195   59445 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-304316 san=[127.0.0.1 192.168.39.6 default-k8s-diff-port-304316 localhost minikube]
	I0416 17:52:01.772074   59445 provision.go:177] copyRemoteCerts
	I0416 17:52:01.772132   59445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:52:01.772151   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:01.774877   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.775180   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.775203   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.775369   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:01.775543   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.775684   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:01.775799   59445 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa Username:docker}
	I0416 17:52:01.859179   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0416 17:52:01.885536   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:52:01.912992   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:52:01.940464   59445 provision.go:87] duration metric: took 239.586146ms to configureAuth
	I0416 17:52:01.940492   59445 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:52:01.940646   59445 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:52:01.940712   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:01.943130   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.943480   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:01.943512   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:01.943640   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:01.943845   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.944015   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:01.944173   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:01.944306   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:52:01.944512   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:52:01.944537   59445 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:52:02.228712   59445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:52:02.228742   59445 machine.go:97] duration metric: took 906.971855ms to provisionDockerMachine
	I0416 17:52:02.228760   59445 start.go:293] postStartSetup for "default-k8s-diff-port-304316" (driver="kvm2")
	I0416 17:52:02.228774   59445 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:52:02.228802   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:52:02.229147   59445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:52:02.229181   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:02.231791   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.232140   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:02.232177   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.232317   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:02.232521   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:02.232713   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:02.232942   59445 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa Username:docker}
	I0416 17:52:02.320711   59445 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:52:02.325644   59445 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:52:02.325667   59445 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:52:02.325741   59445 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:52:02.325853   59445 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:52:02.325975   59445 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:52:02.336171   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:52:02.364002   59445 start.go:296] duration metric: took 135.229428ms for postStartSetup
	I0416 17:52:02.364051   59445 fix.go:56] duration metric: took 23.028951179s for fixHost
	I0416 17:52:02.364076   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:02.366857   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.367207   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:02.367250   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.367382   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:02.367564   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:02.367712   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:02.367839   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:02.367977   59445 main.go:141] libmachine: Using SSH client type: native
	I0416 17:52:02.368154   59445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0416 17:52:02.368166   59445 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:52:02.482289   59445 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713289922.454351257
	
	I0416 17:52:02.482313   59445 fix.go:216] guest clock: 1713289922.454351257
	I0416 17:52:02.482320   59445 fix.go:229] Guest: 2024-04-16 17:52:02.454351257 +0000 UTC Remote: 2024-04-16 17:52:02.364056998 +0000 UTC m=+305.618934968 (delta=90.294259ms)
	I0416 17:52:02.482337   59445 fix.go:200] guest clock delta is within tolerance: 90.294259ms
	I0416 17:52:02.482342   59445 start.go:83] releasing machines lock for "default-k8s-diff-port-304316", held for 23.147274645s
	I0416 17:52:02.482359   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:52:02.482651   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetIP
	I0416 17:52:02.485297   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.485681   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:02.485707   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.485894   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:52:02.486425   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:52:02.486604   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:52:02.486683   59445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:52:02.486721   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:02.486833   59445 ssh_runner.go:195] Run: cat /version.json
	I0416 17:52:02.486856   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:52:02.489368   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.489626   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.489750   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:02.489778   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.489900   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:02.490047   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:02.490060   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:02.490094   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:02.490158   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:02.490251   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:52:02.490397   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:52:02.490386   59445 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa Username:docker}
	I0416 17:52:02.490530   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:52:02.490695   59445 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa Username:docker}
	I0416 17:52:02.570564   59445 ssh_runner.go:195] Run: systemctl --version
	I0416 17:52:02.598264   59445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:52:02.749609   59445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:52:02.756385   59445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:52:02.756440   59445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:52:02.775699   59445 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:52:02.775717   59445 start.go:494] detecting cgroup driver to use...
	I0416 17:52:02.775772   59445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:52:02.794930   59445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:52:02.810114   59445 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:52:02.810160   59445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:52:02.825858   59445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:52:02.840344   59445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:52:02.961598   59445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:52:03.105179   59445 docker.go:233] disabling docker service ...
	I0416 17:52:03.105248   59445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:52:03.123857   59445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:52:03.141254   59445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:52:03.289390   59445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:52:03.413738   59445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:52:03.430113   59445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:52:03.452007   59445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:52:03.452071   59445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:52:03.464781   59445 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:52:03.464833   59445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:52:03.478108   59445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:52:03.489657   59445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:52:03.502499   59445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:52:03.515332   59445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:52:03.528019   59445 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:52:03.549873   59445 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:52:03.564478   59445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:52:03.579006   59445 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 17:52:03.579070   59445 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 17:52:03.596086   59445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:52:03.607746   59445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:52:03.735083   59445 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:52:03.891081   59445 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:52:03.891152   59445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:52:03.896625   59445 start.go:562] Will wait 60s for crictl version
	I0416 17:52:03.896672   59445 ssh_runner.go:195] Run: which crictl
	I0416 17:52:03.900660   59445 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:52:03.940682   59445 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:52:03.940767   59445 ssh_runner.go:195] Run: crio --version
	I0416 17:52:03.971243   59445 ssh_runner.go:195] Run: crio --version
	I0416 17:52:04.004074   59445 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 17:52:04.005433   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetIP
	I0416 17:52:04.007938   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:04.008265   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:52:04.008283   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:52:04.008459   59445 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 17:52:04.012747   59445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:52:04.027265   59445 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:52:04.027370   59445 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:52:04.027409   59445 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:52:04.069888   59445 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 17:52:04.069968   59445 ssh_runner.go:195] Run: which lz4
	I0416 17:52:04.074960   59445 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 17:52:04.079537   59445 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:52:04.079564   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 17:52:05.803786   59445 crio.go:462] duration metric: took 1.728849733s to copy over tarball
	I0416 17:52:05.803890   59445 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:52:08.492309   59445 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.688363261s)
	I0416 17:52:08.492334   59445 crio.go:469] duration metric: took 2.68852058s to extract the tarball
	I0416 17:52:08.492341   59445 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 17:52:08.534514   59445 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:52:08.586711   59445 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:52:08.586735   59445 cache_images.go:84] Images are preloaded, skipping loading
	I0416 17:52:08.586745   59445 kubeadm.go:928] updating node { 192.168.39.6 8444 v1.29.3 crio true true} ...
	I0416 17:52:08.586868   59445 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-304316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 17:52:08.587030   59445 ssh_runner.go:195] Run: crio config
	I0416 17:52:08.638383   59445 cni.go:84] Creating CNI manager for ""
	I0416 17:52:08.638423   59445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:52:08.638444   59445 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:52:08.638546   59445 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-304316 NodeName:default-k8s-diff-port-304316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:52:08.638955   59445 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-304316"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:52:08.639124   59445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:52:08.651632   59445 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:52:08.651694   59445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:52:08.661898   59445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0416 17:52:08.682615   59445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:52:08.701367   59445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0416 17:52:08.720192   59445 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0416 17:52:08.724480   59445 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:52:08.737691   59445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:52:08.861699   59445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:52:08.880124   59445 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316 for IP: 192.168.39.6
	I0416 17:52:08.880147   59445 certs.go:194] generating shared ca certs ...
	I0416 17:52:08.880171   59445 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:52:08.880334   59445 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:52:08.880390   59445 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:52:08.880406   59445 certs.go:256] generating profile certs ...
	I0416 17:52:08.880517   59445 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/client.key
	I0416 17:52:08.880669   59445 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/apiserver.key.260d72ce
	I0416 17:52:08.880832   59445 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/proxy-client.key
	I0416 17:52:08.881040   59445 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:52:08.881084   59445 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:52:08.881098   59445 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:52:08.881128   59445 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:52:08.881172   59445 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:52:08.881202   59445 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:52:08.881257   59445 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:52:08.882082   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:52:08.927398   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:52:08.964468   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:52:08.995906   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:52:09.029625   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 17:52:09.076903   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:52:09.108357   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:52:09.136065   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/default-k8s-diff-port-304316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:52:09.163546   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:52:09.190885   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:52:09.220054   59445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:52:09.247254   59445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:52:09.267784   59445 ssh_runner.go:195] Run: openssl version
	I0416 17:52:09.274743   59445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:52:09.288760   59445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:52:09.294085   59445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:52:09.294130   59445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:52:09.300523   59445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:52:09.314152   59445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:52:09.328508   59445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:52:09.333626   59445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:52:09.333681   59445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:52:09.339950   59445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:52:09.353434   59445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:52:09.366453   59445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:52:09.371598   59445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:52:09.371645   59445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:52:09.380345   59445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 17:52:09.394292   59445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:52:09.399542   59445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 17:52:09.406180   59445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 17:52:09.412964   59445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 17:52:09.419567   59445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 17:52:09.425647   59445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 17:52:09.431867   59445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
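	Each control-plane certificate is then validated with `openssl x509 -checkend 86400`, which exits non-zero if the certificate expires within the next 24 hours. A hedged sketch of that check over the same files, for reference (the loop is illustrative, not minikube's implementation):

	    for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
	               /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	               /var/lib/minikube/certs/etcd/server.crt \
	               /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	               /var/lib/minikube/certs/etcd/peer.crt \
	               /var/lib/minikube/certs/front-proxy-client.crt; do
	      sudo openssl x509 -noout -in "$crt" -checkend 86400 \
	        && echo "$crt: valid for at least 24h" \
	        || echo "$crt: expires within 24h"
	    done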
	I0416 17:52:09.438184   59445 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-304316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-304316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:52:09.438301   59445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:52:09.438348   59445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:52:09.481173   59445 cri.go:89] found id: ""
	I0416 17:52:09.481262   59445 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 17:52:09.493507   59445 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 17:52:09.493526   59445 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 17:52:09.493530   59445 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 17:52:09.493598   59445 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 17:52:09.504619   59445 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:52:09.505718   59445 kubeconfig.go:125] found "default-k8s-diff-port-304316" server: "https://192.168.39.6:8444"
	I0416 17:52:09.507996   59445 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 17:52:09.518838   59445 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.6
	I0416 17:52:09.518864   59445 kubeadm.go:1154] stopping kube-system containers ...
	I0416 17:52:09.518874   59445 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 17:52:09.518930   59445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:52:09.567039   59445 cri.go:89] found id: ""
	I0416 17:52:09.567128   59445 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 17:52:09.587616   59445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:52:09.600438   59445 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:52:09.600458   59445 kubeadm.go:156] found existing configuration files:
	
	I0416 17:52:09.600516   59445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 17:52:09.611765   59445 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:52:09.611829   59445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:52:09.623759   59445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 17:52:09.634253   59445 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:52:09.634326   59445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:52:09.645834   59445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 17:52:09.656587   59445 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:52:09.656626   59445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:52:09.668742   59445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 17:52:09.680323   59445 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:52:09.680400   59445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
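	The four grep/rm pairs above implement the stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444, otherwise it is removed so kubeadm can regenerate it in the phases below. A minimal equivalent, assuming the same endpoint and file set (sketch only, not minikube's code):

	    endpoint='https://control-plane.minikube.internal:8444'
	    for f in /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	             /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
	      sudo grep -q "$endpoint" "$f" 2>/dev/null || sudo rm -f "$f"
	    done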
	I0416 17:52:09.691869   59445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:52:09.703486   59445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:52:09.819932   59445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:52:10.865678   59445 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.045704125s)
	I0416 17:52:10.865716   59445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:52:11.104593   59445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:52:11.196183   59445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:52:11.307154   59445 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:52:11.307236   59445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:52:11.807926   59445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:52:12.308052   59445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:52:12.383454   59445 api_server.go:72] duration metric: took 1.076300487s to wait for apiserver process to appear ...
	I0416 17:52:12.383478   59445 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:52:12.383525   59445 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8444/healthz ...
	I0416 17:52:12.384185   59445 api_server.go:269] stopped: https://192.168.39.6:8444/healthz: Get "https://192.168.39.6:8444/healthz": dial tcp 192.168.39.6:8444: connect: connection refused
	I0416 17:52:12.883876   59445 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8444/healthz ...
	I0416 17:52:15.504707   59445 api_server.go:279] https://192.168.39.6:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 17:52:15.504739   59445 api_server.go:103] status: https://192.168.39.6:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 17:52:15.504752   59445 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8444/healthz ...
	I0416 17:52:15.548369   59445 api_server.go:279] https://192.168.39.6:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 17:52:15.548403   59445 api_server.go:103] status: https://192.168.39.6:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 17:52:15.883767   59445 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8444/healthz ...
	I0416 17:52:15.888524   59445 api_server.go:279] https://192.168.39.6:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 17:52:15.888558   59445 api_server.go:103] status: https://192.168.39.6:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 17:52:16.383813   59445 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8444/healthz ...
	I0416 17:52:16.388875   59445 api_server.go:279] https://192.168.39.6:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 17:52:16.388910   59445 api_server.go:103] status: https://192.168.39.6:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 17:52:16.884291   59445 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8444/healthz ...
	I0416 17:52:16.892882   59445 api_server.go:279] https://192.168.39.6:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 17:52:16.892918   59445 api_server.go:103] status: https://192.168.39.6:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 17:52:17.384173   59445 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8444/healthz ...
	I0416 17:52:17.389969   59445 api_server.go:279] https://192.168.39.6:8444/healthz returned 200:
	ok
	I0416 17:52:17.405509   59445 api_server.go:141] control plane version: v1.29.3
	I0416 17:52:17.405534   59445 api_server.go:131] duration metric: took 5.022050511s to wait for apiserver health ...
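	The /healthz polling above follows the expected sequence on a restart: connection refused while the apiserver container starts, 403 for the anonymous probe once TLS is up, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, then 200. The same endpoint can be probed by hand; a sketch assuming the address from this log and skipping TLS verification:

	    # poll until the apiserver reports healthy (000 = connection refused, then 403/500/200)
	    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.6:8444/healthz)" = "200" ]; do
	      sleep 0.5
	    done
	    echo "apiserver healthz returned 200"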
	I0416 17:52:17.405542   59445 cni.go:84] Creating CNI manager for ""
	I0416 17:52:17.405549   59445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:52:17.407076   59445 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 17:52:17.408316   59445 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 17:52:17.462959   59445 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 17:52:17.497844   59445 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:52:17.516492   59445 system_pods.go:59] 8 kube-system pods found
	I0416 17:52:17.516521   59445 system_pods.go:61] "coredns-76f75df574-bwpbw" [6bc84298-1a4f-4690-a4fd-de2514cde554] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 17:52:17.516529   59445 system_pods.go:61] "etcd-default-k8s-diff-port-304316" [833ab873-0628-4233-a7bd-f9889329e54c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 17:52:17.516535   59445 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-304316" [d63b77e3-3d5f-4381-b892-218ff5c8b81c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 17:52:17.516542   59445 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-304316" [1ca30152-bc07-4fe8-ae71-50956ee3a6df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 17:52:17.516546   59445 system_pods.go:61] "kube-proxy-t44hs" [c0661436-069b-43f0-addb-560fe32b4543] Running
	I0416 17:52:17.516555   59445 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-304316" [f082f62e-a703-42e6-aa41-2c8759188226] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 17:52:17.516562   59445 system_pods.go:61] "metrics-server-57f55c9bc5-rs6mm" [5d7f374f-e34c-4e49-85c7-7ab8010e1693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 17:52:17.516566   59445 system_pods.go:61] "storage-provisioner" [c2716030-9e79-4414-a3ef-4da79c2feafd] Running
	I0416 17:52:17.516580   59445 system_pods.go:74] duration metric: took 18.715156ms to wait for pod list to return data ...
	I0416 17:52:17.516587   59445 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:52:17.520628   59445 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:52:17.520650   59445 node_conditions.go:123] node cpu capacity is 2
	I0416 17:52:17.520663   59445 node_conditions.go:105] duration metric: took 4.068971ms to run NodePressure ...
	I0416 17:52:17.520681   59445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 17:52:17.922012   59445 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 17:52:17.930614   59445 kubeadm.go:733] kubelet initialised
	I0416 17:52:17.930635   59445 kubeadm.go:734] duration metric: took 8.597101ms waiting for restarted kubelet to initialise ...
	I0416 17:52:17.930643   59445 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:52:17.936995   59445 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-bwpbw" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:19.944954   59445 pod_ready.go:102] pod "coredns-76f75df574-bwpbw" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:22.445216   59445 pod_ready.go:102] pod "coredns-76f75df574-bwpbw" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:24.445589   59445 pod_ready.go:102] pod "coredns-76f75df574-bwpbw" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:26.945213   59445 pod_ready.go:102] pod "coredns-76f75df574-bwpbw" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:28.945479   59445 pod_ready.go:102] pod "coredns-76f75df574-bwpbw" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:29.944620   59445 pod_ready.go:92] pod "coredns-76f75df574-bwpbw" in "kube-system" namespace has status "Ready":"True"
	I0416 17:52:29.944646   59445 pod_ready.go:81] duration metric: took 12.00762056s for pod "coredns-76f75df574-bwpbw" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.944657   59445 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.950021   59445 pod_ready.go:92] pod "etcd-default-k8s-diff-port-304316" in "kube-system" namespace has status "Ready":"True"
	I0416 17:52:29.950038   59445 pod_ready.go:81] duration metric: took 5.372518ms for pod "etcd-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.950047   59445 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.954742   59445 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-304316" in "kube-system" namespace has status "Ready":"True"
	I0416 17:52:29.954760   59445 pod_ready.go:81] duration metric: took 4.706478ms for pod "kube-apiserver-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.954767   59445 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.959221   59445 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-304316" in "kube-system" namespace has status "Ready":"True"
	I0416 17:52:29.959239   59445 pod_ready.go:81] duration metric: took 4.464991ms for pod "kube-controller-manager-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.959249   59445 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t44hs" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.964088   59445 pod_ready.go:92] pod "kube-proxy-t44hs" in "kube-system" namespace has status "Ready":"True"
	I0416 17:52:29.964111   59445 pod_ready.go:81] duration metric: took 4.85504ms for pod "kube-proxy-t44hs" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:29.964121   59445 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:31.541937   59445 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-304316" in "kube-system" namespace has status "Ready":"True"
	I0416 17:52:31.541962   59445 pod_ready.go:81] duration metric: took 1.577833769s for pod "kube-scheduler-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:31.541972   59445 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace to be "Ready" ...
	I0416 17:52:33.552785   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:36.049579   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:38.049786   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:40.549801   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:43.049576   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:45.052615   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:47.549557   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:52:49.549675   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.275646840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289976275621061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=220a1cb6-205b-466d-ae52-f886a61354c0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.276194352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=373da296-864a-414a-8dae-3b5204d7efa7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.276251576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=373da296-864a-414a-8dae-3b5204d7efa7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.276284667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=373da296-864a-414a-8dae-3b5204d7efa7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.311684199Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7b037af-3674-47ab-90a1-785fdac97cd6 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.311751245Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7b037af-3674-47ab-90a1-785fdac97cd6 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.314253353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5214a4c-eac4-4336-923c-2cdc60ea3794 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.314829369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289976314803922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5214a4c-eac4-4336-923c-2cdc60ea3794 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.315760474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29feb7e1-b111-455c-87e7-aa339e07e2b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.315835640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29feb7e1-b111-455c-87e7-aa339e07e2b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.315872270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=29feb7e1-b111-455c-87e7-aa339e07e2b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.353074355Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c8003f1-b69a-4585-9f38-6a42a1901ab6 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.353184755Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c8003f1-b69a-4585-9f38-6a42a1901ab6 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.354955573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a79b4d5-a8f2-4753-a92f-7fd7fb44547c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.355426630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289976355389745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a79b4d5-a8f2-4753-a92f-7fd7fb44547c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.356037062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65c57de9-7670-41d6-acde-e1d447cc2643 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.356093136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65c57de9-7670-41d6-acde-e1d447cc2643 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.356129025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=65c57de9-7670-41d6-acde-e1d447cc2643 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.396132124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eeb7f6a1-ad97-4249-a190-66d1049fa84e name=/runtime.v1.RuntimeService/Version
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.396324685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eeb7f6a1-ad97-4249-a190-66d1049fa84e name=/runtime.v1.RuntimeService/Version
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.398296808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd20d6f0-9746-4c86-b5ab-47bfc92be2be name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.398930981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713289976398894837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd20d6f0-9746-4c86-b5ab-47bfc92be2be name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.399969698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e73579a-24ed-420e-bd4a-33fbcd0725b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.400078196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e73579a-24ed-420e-bd4a-33fbcd0725b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:52:56 old-k8s-version-795352 crio[644]: time="2024-04-16 17:52:56.400131345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5e73579a-24ed-420e-bd4a-33fbcd0725b9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr16 17:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052410] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043185] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.634593] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr16 17:30] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.551816] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.379764] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.063374] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074890] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.187967] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.157390] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.275173] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.631339] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.063173] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.875212] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +14.495355] kauditd_printk_skb: 46 callbacks suppressed
	[Apr16 17:34] systemd-fstab-generator[5060]: Ignoring "noauto" option for root device
	[Apr16 17:36] systemd-fstab-generator[5351]: Ignoring "noauto" option for root device
	[  +0.068573] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 17:52:56 up 23 min,  0 users,  load average: 0.01, 0.02, 0.02
	Linux old-k8s-version-795352 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000211560, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000765380, 0x24, 0x0, ...)
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]: net.(*Dialer).DialContext(0xc0001e39e0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000765380, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000a5dea0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000765380, 0x24, 0x60, 0x7f5e84a97658, 0x118, ...)
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]: net/http.(*Transport).dial(0xc000a30000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000765380, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]: net/http.(*Transport).dialConn(0xc000a30000, 0x4f7fe00, 0xc000120018, 0x0, 0xc0007d6600, 0x5, 0xc000765380, 0x24, 0x0, 0xc00017d320, ...)
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]: net/http.(*Transport).dialConnFor(0xc000a30000, 0xc000be3ce0)
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]: created by net/http.(*Transport).queueForDial
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7143]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 16 17:52:51 old-k8s-version-795352 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 16 17:52:51 old-k8s-version-795352 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 16 17:52:51 old-k8s-version-795352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 170.
	Apr 16 17:52:51 old-k8s-version-795352 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 16 17:52:51 old-k8s-version-795352 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7152]: I0416 17:52:51.819685    7152 server.go:416] Version: v1.20.0
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7152]: I0416 17:52:51.820063    7152 server.go:837] Client rotation is on, will bootstrap in background
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7152]: I0416 17:52:51.822899    7152 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7152]: W0416 17:52:51.824388    7152 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 16 17:52:51 old-k8s-version-795352 kubelet[7152]: I0416 17:52:51.824661    7152 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 2 (253.690485ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-795352" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (331.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (413.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512869 -n embed-certs-512869
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-16 17:57:45.70542667 +0000 UTC m=+5901.104103183
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-512869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-512869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.548µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-512869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-512869 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-512869 logs -n 25: (1.490700594s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p auto-726705 sudo journalctl                       | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | -xeu kubelet --all --full                            |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | status docker --all --full                           |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat docker --no-pager                                |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo docker                           | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | system info                                          |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo                                  | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cri-dockerd --version                                |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | status containerd --all --full                       |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat containerd --no-pager                            |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo containerd                       | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | config dump                                          |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | status crio --all --full                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat crio --no-pager                                  |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo find                             | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo crio                             | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | config                                               |                           |         |                |                     |                     |
	| delete  | -p auto-726705                                       | auto-726705               | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	| start   | -p flannel-726705                                    | flannel-726705            | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:57 UTC |
	|         | --memory=3072                                        |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |                |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| delete  | -p no-preload-368813                                 | no-preload-368813         | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	| start   | -p enable-default-cni-726705                         | enable-default-cni-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | --memory=3072                                        |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |                |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |                |                     |                     |
	|         | --driver=kvm2                                        |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:56:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:56:58.219429   65026 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:56:58.219524   65026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:56:58.219533   65026 out.go:304] Setting ErrFile to fd 2...
	I0416 17:56:58.219537   65026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:56:58.219709   65026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:56:58.220275   65026 out.go:298] Setting JSON to false
	I0416 17:56:58.221220   65026 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5970,"bootTime":1713284248,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:56:58.221280   65026 start.go:139] virtualization: kvm guest
	I0416 17:56:58.223522   65026 out.go:177] * [enable-default-cni-726705] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:56:58.224908   65026 notify.go:220] Checking for updates...
	I0416 17:56:58.224932   65026 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:56:58.226238   65026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:56:58.227452   65026 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:56:58.228782   65026 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:56:58.229914   65026 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:56:58.231074   65026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:56:58.232861   65026 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:56:58.233014   65026 config.go:182] Loaded profile config "embed-certs-512869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:56:58.233139   65026 config.go:182] Loaded profile config "flannel-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:56:58.233238   65026 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:56:58.269641   65026 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 17:56:58.270929   65026 start.go:297] selected driver: kvm2
	I0416 17:56:58.270943   65026 start.go:901] validating driver "kvm2" against <nil>
	I0416 17:56:58.270956   65026 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:56:58.271960   65026 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:56:58.272043   65026 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:56:58.287201   65026 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:56:58.287246   65026 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0416 17:56:58.287443   65026 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0416 17:56:58.287466   65026 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:56:58.287529   65026 cni.go:84] Creating CNI manager for "bridge"
	I0416 17:56:58.287541   65026 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 17:56:58.287605   65026 start.go:340] cluster config:
	{Name:enable-default-cni-726705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:56:58.287707   65026 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:56:58.289604   65026 out.go:177] * Starting "enable-default-cni-726705" primary control-plane node in "enable-default-cni-726705" cluster
	I0416 17:56:58.291040   65026 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:56:58.291084   65026 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:56:58.291104   65026 cache.go:56] Caching tarball of preloaded images
	I0416 17:56:58.291182   65026 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:56:58.291197   65026 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:56:58.291334   65026 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/config.json ...
	I0416 17:56:58.291364   65026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/config.json: {Name:mk4e80f0dc6e8cf35d05c5607e135efc28d7c187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:56:58.291555   65026 start.go:360] acquireMachinesLock for enable-default-cni-726705: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:56:58.291601   65026 start.go:364] duration metric: took 21.891µs to acquireMachinesLock for "enable-default-cni-726705"
	I0416 17:56:58.291621   65026 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:56:58.291996   65026 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 17:56:58.294235   65026 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0416 17:56:58.294417   65026 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:56:58.294450   65026 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:56:58.308535   65026 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0416 17:56:58.309012   65026 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:56:58.309507   65026 main.go:141] libmachine: Using API Version  1
	I0416 17:56:58.309522   65026 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:56:58.309877   65026 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:56:58.310069   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetMachineName
	I0416 17:56:58.310232   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .DriverName
	I0416 17:56:58.310429   65026 start.go:159] libmachine.API.Create for "enable-default-cni-726705" (driver="kvm2")
	I0416 17:56:58.310464   65026 client.go:168] LocalClient.Create starting
	I0416 17:56:58.310493   65026 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 17:56:58.310527   65026 main.go:141] libmachine: Decoding PEM data...
	I0416 17:56:58.310546   65026 main.go:141] libmachine: Parsing certificate...
	I0416 17:56:58.310606   65026 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 17:56:58.310625   65026 main.go:141] libmachine: Decoding PEM data...
	I0416 17:56:58.310637   65026 main.go:141] libmachine: Parsing certificate...
	I0416 17:56:58.310654   65026 main.go:141] libmachine: Running pre-create checks...
	I0416 17:56:58.310663   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .PreCreateCheck
	I0416 17:56:58.310963   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetConfigRaw
	I0416 17:56:58.311332   65026 main.go:141] libmachine: Creating machine...
	I0416 17:56:58.311348   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .Create
	I0416 17:56:58.311472   65026 main.go:141] libmachine: (enable-default-cni-726705) Creating KVM machine...
	I0416 17:56:58.312681   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found existing default KVM network
	I0416 17:56:58.313913   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:56:58.313757   65049 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:c7:a6} reservation:<nil>}
	I0416 17:56:58.314827   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:56:58.314744   65049 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7f:0c:2e} reservation:<nil>}
	I0416 17:56:58.315908   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:56:58.315815   65049 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b8ba0}
	I0416 17:56:58.315932   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | created network xml: 
	I0416 17:56:58.315948   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | <network>
	I0416 17:56:58.315960   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |   <name>mk-enable-default-cni-726705</name>
	I0416 17:56:58.315995   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |   <dns enable='no'/>
	I0416 17:56:58.316009   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |   
	I0416 17:56:58.316025   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0416 17:56:58.316036   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |     <dhcp>
	I0416 17:56:58.316057   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0416 17:56:58.316118   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |     </dhcp>
	I0416 17:56:58.316151   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |   </ip>
	I0416 17:56:58.316161   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG |   
	I0416 17:56:58.316174   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | </network>
	I0416 17:56:58.316186   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | 
	I0416 17:56:58.321088   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | trying to create private KVM network mk-enable-default-cni-726705 192.168.61.0/24...
	I0416 17:56:58.394050   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | private KVM network mk-enable-default-cni-726705 192.168.61.0/24 created
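
The "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.61.0/24" show the driver probing candidate /24 networks until it finds one that no existing libvirt bridge already claims, then defining a private network on it. A minimal, self-contained sketch of that selection step is below; the candidate list and the pickFreeSubnet helper are illustrative assumptions, not minikube's actual code.

// Sketch only: choose the first candidate /24 that does not collide with a
// CIDR already used by an existing libvirt network.
package main

import (
	"fmt"
	"net"
)

func pickFreeSubnet(candidates, taken []string) (*net.IPNet, error) {
	var takenNets []*net.IPNet
	for _, t := range taken {
		_, n, err := net.ParseCIDR(t)
		if err != nil {
			return nil, err
		}
		takenNets = append(takenNets, n)
	}
	for _, c := range candidates {
		_, n, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		overlaps := false
		for _, t := range takenNets {
			// Two /24s overlap if either contains the other's base address.
			if t.Contains(n.IP) || n.Contains(t.IP) {
				overlaps = true
				break
			}
		}
		if !overlaps {
			return n, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %d candidates", len(candidates))
}

func main() {
	free, err := pickFreeSubnet(
		[]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"},
		[]string{"192.168.39.0/24", "192.168.50.0/24"}, // already held by virbr1/virbr2 above
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", free) // e.g. 192.168.61.0/24
}

The chosen subnet is then embedded in the <ip>/<dhcp> section of the network XML echoed above before the network is created.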
	I0416 17:56:58.394080   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:56:58.394021   65049 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:56:58.394096   65026 main.go:141] libmachine: (enable-default-cni-726705) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705 ...
	I0416 17:56:58.394113   65026 main.go:141] libmachine: (enable-default-cni-726705) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 17:56:58.394267   65026 main.go:141] libmachine: (enable-default-cni-726705) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:56:58.619970   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:56:58.619856   65049 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/id_rsa...
	I0416 17:56:58.756574   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:56:58.756446   65049 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/enable-default-cni-726705.rawdisk...
	I0416 17:56:58.756602   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Writing magic tar header
	I0416 17:56:58.756618   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Writing SSH key tar header
	I0416 17:56:58.756626   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:56:58.756578   65049 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705 ...
	I0416 17:56:58.756710   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705
	I0416 17:56:58.756729   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 17:56:58.756742   65026 main.go:141] libmachine: (enable-default-cni-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705 (perms=drwx------)
	I0416 17:56:58.756758   65026 main.go:141] libmachine: (enable-default-cni-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 17:56:58.756769   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:56:58.756778   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 17:56:58.756787   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 17:56:58.756795   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Checking permissions on dir: /home/jenkins
	I0416 17:56:58.756803   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Checking permissions on dir: /home
	I0416 17:56:58.756814   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Skipping /home - not owner
	I0416 17:56:58.756823   65026 main.go:141] libmachine: (enable-default-cni-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 17:56:58.756890   65026 main.go:141] libmachine: (enable-default-cni-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 17:56:58.756919   65026 main.go:141] libmachine: (enable-default-cni-726705) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 17:56:58.756931   65026 main.go:141] libmachine: (enable-default-cni-726705) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
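
Before defining the domain, the driver walks from the machine directory up toward /home and adds the executable (search) bit where it can, so libvirt/qemu can traverse into the disk image; directories it does not own (here /home) are skipped. A rough sketch of that walk, assuming a hypothetical ensureTraversable helper:

// Sketch: add the owner search bit to each directory on the way up to stopAt,
// tolerating directories we are not allowed to change.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func ensureTraversable(path, stopAt string) error {
	for dir := path; ; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if info.Mode().Perm()&0o111 == 0 {
			if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
				// Mirrors the "Skipping /home - not owner" case: do not fail hard.
				fmt.Printf("skipping %s: %v\n", dir, err)
			} else {
				fmt.Printf("set executable bit on %s\n", dir)
			}
		}
		if dir == stopAt || dir == filepath.Dir(dir) {
			return nil
		}
	}
}

func main() {
	if err := ensureTraversable(os.TempDir(), "/"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}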
	I0416 17:56:58.756946   65026 main.go:141] libmachine: (enable-default-cni-726705) Creating domain...
	I0416 17:56:58.758245   65026 main.go:141] libmachine: (enable-default-cni-726705) define libvirt domain using xml: 
	I0416 17:56:58.758266   65026 main.go:141] libmachine: (enable-default-cni-726705) <domain type='kvm'>
	I0416 17:56:58.758277   65026 main.go:141] libmachine: (enable-default-cni-726705)   <name>enable-default-cni-726705</name>
	I0416 17:56:58.758292   65026 main.go:141] libmachine: (enable-default-cni-726705)   <memory unit='MiB'>3072</memory>
	I0416 17:56:58.758306   65026 main.go:141] libmachine: (enable-default-cni-726705)   <vcpu>2</vcpu>
	I0416 17:56:58.758317   65026 main.go:141] libmachine: (enable-default-cni-726705)   <features>
	I0416 17:56:58.758327   65026 main.go:141] libmachine: (enable-default-cni-726705)     <acpi/>
	I0416 17:56:58.758341   65026 main.go:141] libmachine: (enable-default-cni-726705)     <apic/>
	I0416 17:56:58.758354   65026 main.go:141] libmachine: (enable-default-cni-726705)     <pae/>
	I0416 17:56:58.758369   65026 main.go:141] libmachine: (enable-default-cni-726705)     
	I0416 17:56:58.758386   65026 main.go:141] libmachine: (enable-default-cni-726705)   </features>
	I0416 17:56:58.758398   65026 main.go:141] libmachine: (enable-default-cni-726705)   <cpu mode='host-passthrough'>
	I0416 17:56:58.758411   65026 main.go:141] libmachine: (enable-default-cni-726705)   
	I0416 17:56:58.758422   65026 main.go:141] libmachine: (enable-default-cni-726705)   </cpu>
	I0416 17:56:58.758435   65026 main.go:141] libmachine: (enable-default-cni-726705)   <os>
	I0416 17:56:58.758451   65026 main.go:141] libmachine: (enable-default-cni-726705)     <type>hvm</type>
	I0416 17:56:58.758464   65026 main.go:141] libmachine: (enable-default-cni-726705)     <boot dev='cdrom'/>
	I0416 17:56:58.758480   65026 main.go:141] libmachine: (enable-default-cni-726705)     <boot dev='hd'/>
	I0416 17:56:58.758494   65026 main.go:141] libmachine: (enable-default-cni-726705)     <bootmenu enable='no'/>
	I0416 17:56:58.758505   65026 main.go:141] libmachine: (enable-default-cni-726705)   </os>
	I0416 17:56:58.758530   65026 main.go:141] libmachine: (enable-default-cni-726705)   <devices>
	I0416 17:56:58.758563   65026 main.go:141] libmachine: (enable-default-cni-726705)     <disk type='file' device='cdrom'>
	I0416 17:56:58.758591   65026 main.go:141] libmachine: (enable-default-cni-726705)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/boot2docker.iso'/>
	I0416 17:56:58.758604   65026 main.go:141] libmachine: (enable-default-cni-726705)       <target dev='hdc' bus='scsi'/>
	I0416 17:56:58.758617   65026 main.go:141] libmachine: (enable-default-cni-726705)       <readonly/>
	I0416 17:56:58.758628   65026 main.go:141] libmachine: (enable-default-cni-726705)     </disk>
	I0416 17:56:58.758656   65026 main.go:141] libmachine: (enable-default-cni-726705)     <disk type='file' device='disk'>
	I0416 17:56:58.758682   65026 main.go:141] libmachine: (enable-default-cni-726705)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 17:56:58.758701   65026 main.go:141] libmachine: (enable-default-cni-726705)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/enable-default-cni-726705.rawdisk'/>
	I0416 17:56:58.758713   65026 main.go:141] libmachine: (enable-default-cni-726705)       <target dev='hda' bus='virtio'/>
	I0416 17:56:58.758725   65026 main.go:141] libmachine: (enable-default-cni-726705)     </disk>
	I0416 17:56:58.758736   65026 main.go:141] libmachine: (enable-default-cni-726705)     <interface type='network'>
	I0416 17:56:58.758748   65026 main.go:141] libmachine: (enable-default-cni-726705)       <source network='mk-enable-default-cni-726705'/>
	I0416 17:56:58.758759   65026 main.go:141] libmachine: (enable-default-cni-726705)       <model type='virtio'/>
	I0416 17:56:58.758772   65026 main.go:141] libmachine: (enable-default-cni-726705)     </interface>
	I0416 17:56:58.758781   65026 main.go:141] libmachine: (enable-default-cni-726705)     <interface type='network'>
	I0416 17:56:58.758794   65026 main.go:141] libmachine: (enable-default-cni-726705)       <source network='default'/>
	I0416 17:56:58.758805   65026 main.go:141] libmachine: (enable-default-cni-726705)       <model type='virtio'/>
	I0416 17:56:58.758816   65026 main.go:141] libmachine: (enable-default-cni-726705)     </interface>
	I0416 17:56:58.758826   65026 main.go:141] libmachine: (enable-default-cni-726705)     <serial type='pty'>
	I0416 17:56:58.758839   65026 main.go:141] libmachine: (enable-default-cni-726705)       <target port='0'/>
	I0416 17:56:58.758858   65026 main.go:141] libmachine: (enable-default-cni-726705)     </serial>
	I0416 17:56:58.758877   65026 main.go:141] libmachine: (enable-default-cni-726705)     <console type='pty'>
	I0416 17:56:58.758896   65026 main.go:141] libmachine: (enable-default-cni-726705)       <target type='serial' port='0'/>
	I0416 17:56:58.758908   65026 main.go:141] libmachine: (enable-default-cni-726705)     </console>
	I0416 17:56:58.758919   65026 main.go:141] libmachine: (enable-default-cni-726705)     <rng model='virtio'>
	I0416 17:56:58.758932   65026 main.go:141] libmachine: (enable-default-cni-726705)       <backend model='random'>/dev/random</backend>
	I0416 17:56:58.758943   65026 main.go:141] libmachine: (enable-default-cni-726705)     </rng>
	I0416 17:56:58.758955   65026 main.go:141] libmachine: (enable-default-cni-726705)     
	I0416 17:56:58.758966   65026 main.go:141] libmachine: (enable-default-cni-726705)     
	I0416 17:56:58.758978   65026 main.go:141] libmachine: (enable-default-cni-726705)   </devices>
	I0416 17:56:58.758994   65026 main.go:141] libmachine: (enable-default-cni-726705) </domain>
	I0416 17:56:58.759006   65026 main.go:141] libmachine: (enable-default-cni-726705) 
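
The XML echoed above is the complete libvirt domain definition the driver submits: 2 vCPUs, 3072 MiB of RAM, the boot2docker ISO attached as a SCSI cdrom, the raw disk on virtio, and one NIC on the private mk-enable-default-cni-726705 network plus one on libvirt's default network. A hedged sketch of producing such a definition with text/template follows; the struct fields and the trimmed template are assumptions for illustration, not the exact XML minikube emits.

// Sketch: render a cut-down libvirt domain definition from a parameter struct.
package main

import (
	"os"
	"text/template"
)

type domainParams struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values mirror the run above; the rendered XML would be handed to libvirt
	// (virsh define or the API) to create the machine.
	p := domainParams{
		Name:      "enable-default-cni-726705",
		MemoryMiB: 3072,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/enable-default-cni-726705.rawdisk",
		Network:   "mk-enable-default-cni-726705",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}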
	I0416 17:56:58.762969   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:ad:2a:dc in network default
	I0416 17:56:58.763599   65026 main.go:141] libmachine: (enable-default-cni-726705) Ensuring networks are active...
	I0416 17:56:58.763621   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:56:58.764353   65026 main.go:141] libmachine: (enable-default-cni-726705) Ensuring network default is active
	I0416 17:56:58.764803   65026 main.go:141] libmachine: (enable-default-cni-726705) Ensuring network mk-enable-default-cni-726705 is active
	I0416 17:56:58.765372   65026 main.go:141] libmachine: (enable-default-cni-726705) Getting domain xml...
	I0416 17:56:58.766341   65026 main.go:141] libmachine: (enable-default-cni-726705) Creating domain...
	I0416 17:57:00.140831   65026 main.go:141] libmachine: (enable-default-cni-726705) Waiting to get IP...
	I0416 17:57:00.141837   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:00.142384   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:00.142488   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:00.142384   65049 retry.go:31] will retry after 254.892623ms: waiting for machine to come up
	I0416 17:57:00.399153   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:00.399743   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:00.399765   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:00.399707   65049 retry.go:31] will retry after 283.524449ms: waiting for machine to come up
	I0416 17:57:00.685351   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:00.685872   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:00.685914   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:00.685847   65049 retry.go:31] will retry after 487.344312ms: waiting for machine to come up
	I0416 17:57:01.174522   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:01.175141   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:01.175173   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:01.175064   65049 retry.go:31] will retry after 435.747963ms: waiting for machine to come up
	I0416 17:57:01.612963   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:01.613642   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:01.613663   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:01.613574   65049 retry.go:31] will retry after 624.744446ms: waiting for machine to come up
	I0416 17:57:02.240560   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:02.241151   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:02.241183   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:02.241074   65049 retry.go:31] will retry after 773.844509ms: waiting for machine to come up
	I0416 17:57:03.016203   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:03.016759   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:03.016788   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:03.016715   65049 retry.go:31] will retry after 860.608357ms: waiting for machine to come up
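
The repeated "unable to find current IP address ... will retry after ..." lines are a polling loop: the driver keeps asking libvirt for the domain's DHCP lease, sleeping a little longer (with jitter) each time, until the machine reports an address or a deadline passes. A minimal sketch of that pattern, assuming a hypothetical lookupLeaseIP stand-in for the real lease query:

// Sketch: poll for the guest IP with a growing, jittered delay.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP stands in for querying libvirt's DHCP leases by MAC address;
// here it always fails so the retry behaviour is visible.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly like the 254ms, 283ms,
		// 487ms, ... progression in the log above.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:24:ce:ae", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}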
	I0416 17:57:06.331335   59445 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (34.788832325s)
	I0416 17:57:06.331417   59445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:57:06.350570   59445 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:57:06.363213   59445 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:57:06.375430   59445 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:06.375457   59445 kubeadm.go:156] found existing configuration files:
	
	I0416 17:57:06.375509   59445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 17:57:06.388443   59445 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:06.388508   59445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:57:06.404668   59445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 17:57:06.420155   59445 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:06.420225   59445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:57:06.436127   59445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 17:57:06.451236   59445 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:06.451300   59445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:57:06.467001   59445 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 17:57:06.481738   59445 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:06.481804   59445 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
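
Because the config check above found none of the expected kubeconfig files, minikube treats any file that does not reference https://control-plane.minikube.internal:8444 as stale and removes it before re-running kubeadm init. A sketch of that cleanup under stated assumptions (cleanupStaleConfigs is an illustrative name; the real commands run over SSH with sudo, so this is purely local illustration):

// Sketch: keep each kubeconfig only if it already points at the expected
// control-plane endpoint, otherwise remove it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanupStaleConfigs(endpoint string, files []string) error {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			if os.IsNotExist(err) {
				continue // same as the "No such file or directory" case in the log
			}
			return err
		}
		if strings.Contains(string(data), endpoint) {
			continue // already targets this cluster, keep it
		}
		fmt.Printf("%q does not reference %s, removing\n", f, endpoint)
		if err := os.Remove(f); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := cleanupStaleConfigs("https://control-plane.minikube.internal:8444", files); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}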
	I0416 17:57:06.494520   59445 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:57:06.730796   59445 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:07.622082   64516 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 17:57:07.622132   64516 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:57:07.622209   64516 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:07.622335   64516 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:07.622467   64516 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:07.622559   64516 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:07.624379   64516 out.go:204]   - Generating certificates and keys ...
	I0416 17:57:07.624475   64516 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:57:07.624536   64516 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:07.624613   64516 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:07.624681   64516 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:07.624774   64516 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:07.624884   64516 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:07.624977   64516 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 17:57:07.625150   64516 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [flannel-726705 localhost] and IPs [192.168.50.192 127.0.0.1 ::1]
	I0416 17:57:07.625247   64516 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:07.625428   64516 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [flannel-726705 localhost] and IPs [192.168.50.192 127.0.0.1 ::1]
	I0416 17:57:07.625526   64516 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:07.625633   64516 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:07.625702   64516 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 17:57:07.625780   64516 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:07.625859   64516 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:07.625935   64516 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:07.626008   64516 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:07.626097   64516 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:07.626195   64516 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:07.626347   64516 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:07.626462   64516 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:07.628332   64516 out.go:204]   - Booting up control plane ...
	I0416 17:57:07.628428   64516 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:07.628508   64516 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:07.628583   64516 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:07.628704   64516 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:07.628813   64516 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:07.628884   64516 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:57:07.629058   64516 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:07.629155   64516 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003010 seconds
	I0416 17:57:07.629241   64516 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:07.629356   64516 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:07.629421   64516 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:07.629585   64516 kubeadm.go:309] [mark-control-plane] Marking the node flannel-726705 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:07.629652   64516 kubeadm.go:309] [bootstrap-token] Using token: d8xz44.ktrfeuf1ymvsg39j
	I0416 17:57:07.631367   64516 out.go:204]   - Configuring RBAC rules ...
	I0416 17:57:07.631473   64516 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:07.631590   64516 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:07.631763   64516 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:07.631961   64516 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:07.632132   64516 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:07.632271   64516 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:07.632425   64516 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:07.632499   64516 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 17:57:07.632561   64516 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 17:57:07.632570   64516 kubeadm.go:309] 
	I0416 17:57:07.632649   64516 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:07.632659   64516 kubeadm.go:309] 
	I0416 17:57:07.632774   64516 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:07.632786   64516 kubeadm.go:309] 
	I0416 17:57:07.632821   64516 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 17:57:07.632916   64516 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:07.632998   64516 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:07.633009   64516 kubeadm.go:309] 
	I0416 17:57:07.633094   64516 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 17:57:07.633104   64516 kubeadm.go:309] 
	I0416 17:57:07.633178   64516 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:07.633191   64516 kubeadm.go:309] 
	I0416 17:57:07.633268   64516 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 17:57:07.633374   64516 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:07.633480   64516 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:07.633492   64516 kubeadm.go:309] 
	I0416 17:57:07.633622   64516 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:07.633736   64516 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 17:57:07.633746   64516 kubeadm.go:309] 
	I0416 17:57:07.633863   64516 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d8xz44.ktrfeuf1ymvsg39j \
	I0416 17:57:07.633990   64516 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 \
	I0416 17:57:07.634022   64516 kubeadm.go:309] 	--control-plane 
	I0416 17:57:07.634038   64516 kubeadm.go:309] 
	I0416 17:57:07.634141   64516 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:07.634145   64516 kubeadm.go:309] 
	I0416 17:57:07.634243   64516 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d8xz44.ktrfeuf1ymvsg39j \
	I0416 17:57:07.634398   64516 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 
	I0416 17:57:07.634413   64516 cni.go:84] Creating CNI manager for "flannel"
	I0416 17:57:07.636205   64516 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0416 17:57:07.637553   64516 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0416 17:57:07.651517   64516 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 17:57:07.651534   64516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0416 17:57:07.710822   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
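
With the control plane up, the flannel manifest is copied to /var/tmp/minikube/cni.yaml and applied with the bundled kubectl against /var/lib/minikube/kubeconfig. A simplified local sketch of the same apply step (applyCNIManifest is an assumed helper, and the manifest body here is a placeholder):

// Sketch: write a CNI manifest to a temp file and apply it with kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyCNIManifest(kubectl, kubeconfig string, manifest []byte) error {
	tmp, err := os.CreateTemp("", "cni-*.yaml")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.Write(manifest); err != nil {
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	cmd := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", tmp.Name())
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	manifest := []byte("# flannel DaemonSet, ConfigMap and RBAC objects would go here\n")
	err := applyCNIManifest(
		"/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		manifest,
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}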
	I0416 17:57:03.879005   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:03.879596   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:03.879623   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:03.879545   65049 retry.go:31] will retry after 1.346009441s: waiting for machine to come up
	I0416 17:57:05.227600   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:05.228105   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:05.228145   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:05.228056   65049 retry.go:31] will retry after 1.840374884s: waiting for machine to come up
	I0416 17:57:07.070241   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:07.070699   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:07.070722   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:07.070664   65049 retry.go:31] will retry after 2.040676021s: waiting for machine to come up
	I0416 17:57:08.440288   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:08.440354   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-726705 minikube.k8s.io/updated_at=2024_04_16T17_57_08_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=flannel-726705 minikube.k8s.io/primary=true
	I0416 17:57:08.440374   64516 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:57:08.468218   64516 ops.go:34] apiserver oom_adj: -16
	I0416 17:57:08.616365   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:09.117409   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:09.616769   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:10.117082   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:10.616718   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:11.116962   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:11.617455   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:12.116982   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:12.616539   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
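
The burst of identical "kubectl get sa default" runs is a readiness gate: the default ServiceAccount only exists once the controller-manager has reconciled the new namespace, so minikube polls for it roughly every half second. A minimal sketch of that wait loop, with waitForDefaultSA as an assumed name:

// Sketch: retry `kubectl get sa default` until it succeeds or times out.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // ServiceAccount exists, the cluster is usable
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not found within %v", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}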
	I0416 17:57:09.112519   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:09.112998   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:09.113025   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:09.112965   65049 retry.go:31] will retry after 2.711763152s: waiting for machine to come up
	I0416 17:57:11.826959   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:11.827689   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:11.827718   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:11.827637   65049 retry.go:31] will retry after 2.948653167s: waiting for machine to come up
	I0416 17:57:16.291599   59445 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 17:57:16.291699   59445 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:57:16.291790   59445 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:16.291920   59445 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:16.292053   59445 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:16.292141   59445 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:16.293566   59445 out.go:204]   - Generating certificates and keys ...
	I0416 17:57:16.293656   59445 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:57:16.293752   59445 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:16.293864   59445 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 17:57:16.293945   59445 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 17:57:16.294098   59445 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 17:57:16.294201   59445 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 17:57:16.294281   59445 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 17:57:16.294356   59445 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 17:57:16.294454   59445 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 17:57:16.294584   59445 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 17:57:16.294655   59445 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 17:57:16.294745   59445 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:16.294812   59445 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:16.294865   59445 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:16.294928   59445 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:16.295015   59445 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:16.295088   59445 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:16.295204   59445 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:16.295306   59445 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:16.296880   59445 out.go:204]   - Booting up control plane ...
	I0416 17:57:16.296994   59445 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:16.297106   59445 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:16.297175   59445 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:16.297260   59445 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:16.297343   59445 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:16.297400   59445 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:57:16.297631   59445 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:16.297750   59445 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503207 seconds
	I0416 17:57:16.297893   59445 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:16.298068   59445 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:16.298156   59445 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:16.298406   59445 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-304316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:16.298489   59445 kubeadm.go:309] [bootstrap-token] Using token: wo3c33.715belgrwrx3ra0o
	I0416 17:57:16.299884   59445 out.go:204]   - Configuring RBAC rules ...
	I0416 17:57:16.300021   59445 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:16.300146   59445 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:16.300350   59445 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:16.300535   59445 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:16.300704   59445 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:16.300821   59445 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:16.300981   59445 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:16.301036   59445 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 17:57:16.301099   59445 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 17:57:16.301108   59445 kubeadm.go:309] 
	I0416 17:57:16.301191   59445 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:16.301201   59445 kubeadm.go:309] 
	I0416 17:57:16.301284   59445 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:16.301296   59445 kubeadm.go:309] 
	I0416 17:57:16.301327   59445 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 17:57:16.301410   59445 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:16.301478   59445 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:16.301488   59445 kubeadm.go:309] 
	I0416 17:57:16.301583   59445 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 17:57:16.301597   59445 kubeadm.go:309] 
	I0416 17:57:16.301661   59445 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:16.301673   59445 kubeadm.go:309] 
	I0416 17:57:16.301761   59445 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 17:57:16.301886   59445 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:16.301994   59445 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:16.302004   59445 kubeadm.go:309] 
	I0416 17:57:16.302105   59445 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:16.302221   59445 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 17:57:16.302232   59445 kubeadm.go:309] 
	I0416 17:57:16.302354   59445 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token wo3c33.715belgrwrx3ra0o \
	I0416 17:57:16.302504   59445 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 \
	I0416 17:57:16.302535   59445 kubeadm.go:309] 	--control-plane 
	I0416 17:57:16.302544   59445 kubeadm.go:309] 
	I0416 17:57:16.302676   59445 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:16.302693   59445 kubeadm.go:309] 
	I0416 17:57:16.302787   59445 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token wo3c33.715belgrwrx3ra0o \
	I0416 17:57:16.302924   59445 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 
	I0416 17:57:16.302936   59445 cni.go:84] Creating CNI manager for ""
	I0416 17:57:16.302945   59445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 17:57:16.304426   59445 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 17:57:16.305822   59445 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 17:57:16.350278   59445 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 17:57:16.424964   59445 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 17:57:16.425042   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:16.425082   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-304316 minikube.k8s.io/updated_at=2024_04_16T17_57_16_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=default-k8s-diff-port-304316 minikube.k8s.io/primary=true
	I0416 17:57:16.759034   59445 ops.go:34] apiserver oom_adj: -16
	I0416 17:57:16.759103   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:13.116891   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:13.617402   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:14.116502   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:14.617189   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:15.116577   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:15.617322   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:16.116505   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:16.617110   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:17.116414   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:17.617173   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:14.777906   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:14.778576   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:14.778601   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:14.778526   65049 retry.go:31] will retry after 3.94804781s: waiting for machine to come up
	I0416 17:57:18.116967   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:18.617463   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:19.117349   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:19.616388   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:20.116480   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:20.616523   64516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:20.793367   64516 kubeadm.go:1107] duration metric: took 12.353124987s to wait for elevateKubeSystemPrivileges
	W0416 17:57:20.793419   64516 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 17:57:20.793431   64516 kubeadm.go:393] duration metric: took 25.267446378s to StartCluster
	I0416 17:57:20.793457   64516 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:20.793549   64516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:57:20.795655   64516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:20.795898   64516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 17:57:20.795910   64516 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:57:20.797436   64516 out.go:177] * Verifying Kubernetes components...
	I0416 17:57:20.795988   64516 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:57:20.796190   64516 config.go:182] Loaded profile config "flannel-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:57:20.798977   64516 addons.go:69] Setting storage-provisioner=true in profile "flannel-726705"
	I0416 17:57:20.799001   64516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:20.799031   64516 addons.go:234] Setting addon storage-provisioner=true in "flannel-726705"
	I0416 17:57:20.799078   64516 addons.go:69] Setting default-storageclass=true in profile "flannel-726705"
	I0416 17:57:20.799114   64516 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-726705"
	I0416 17:57:20.799080   64516 host.go:66] Checking if "flannel-726705" exists ...
	I0416 17:57:20.799651   64516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:20.799658   64516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:20.799698   64516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:20.799781   64516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:20.816128   64516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0416 17:57:20.816635   64516 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:20.817211   64516 main.go:141] libmachine: Using API Version  1
	I0416 17:57:20.817240   64516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:20.817621   64516 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:20.817898   64516 main.go:141] libmachine: (flannel-726705) Calling .GetState
	I0416 17:57:20.818915   64516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0416 17:57:20.819359   64516 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:20.819882   64516 main.go:141] libmachine: Using API Version  1
	I0416 17:57:20.819904   64516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:20.820937   64516 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:20.821641   64516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:20.821672   64516 addons.go:234] Setting addon default-storageclass=true in "flannel-726705"
	I0416 17:57:20.821681   64516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:20.821705   64516 host.go:66] Checking if "flannel-726705" exists ...
	I0416 17:57:20.822075   64516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:20.822111   64516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:20.837238   64516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I0416 17:57:20.837712   64516 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:20.838262   64516 main.go:141] libmachine: Using API Version  1
	I0416 17:57:20.838288   64516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:20.838821   64516 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:20.839264   64516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:20.839303   64516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:20.840249   64516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37481
	I0416 17:57:20.840678   64516 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:20.841196   64516 main.go:141] libmachine: Using API Version  1
	I0416 17:57:20.841218   64516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:20.841647   64516 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:20.841852   64516 main.go:141] libmachine: (flannel-726705) Calling .GetState
	I0416 17:57:20.845636   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:57:20.848011   64516 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:17.259392   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:17.759348   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:18.260182   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:18.760047   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:19.260023   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:19.760017   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:20.259330   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:20.759221   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:21.259338   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:21.760039   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:20.849606   64516 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:20.849627   64516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:57:20.849651   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:57:20.852787   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:57:20.853238   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:57:20.853265   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:57:20.853407   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:57:20.853569   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:57:20.853713   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:57:20.853846   64516 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa Username:docker}
	I0416 17:57:20.858405   64516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0416 17:57:20.858882   64516 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:20.859441   64516 main.go:141] libmachine: Using API Version  1
	I0416 17:57:20.859462   64516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:20.859775   64516 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:20.860034   64516 main.go:141] libmachine: (flannel-726705) Calling .GetState
	I0416 17:57:20.861776   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:57:20.862165   64516 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:20.862182   64516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:57:20.862197   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:57:20.864827   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:57:20.865230   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:57:20.865249   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:57:20.865505   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:57:20.865745   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:57:20.865897   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:57:20.866100   64516 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa Username:docker}
	I0416 17:57:21.082312   64516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:21.082524   64516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 17:57:21.118725   64516 node_ready.go:35] waiting up to 15m0s for node "flannel-726705" to be "Ready" ...
	I0416 17:57:21.244943   64516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:21.346767   64516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:21.545566   64516 start.go:946] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0416 17:57:21.545655   64516 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:21.545678   64516 main.go:141] libmachine: (flannel-726705) Calling .Close
	I0416 17:57:21.546021   64516 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:21.546035   64516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:21.546046   64516 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:21.546057   64516 main.go:141] libmachine: (flannel-726705) Calling .Close
	I0416 17:57:21.546321   64516 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:21.546373   64516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:21.546370   64516 main.go:141] libmachine: (flannel-726705) DBG | Closing plugin on server side
	I0416 17:57:21.562873   64516 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:21.562896   64516 main.go:141] libmachine: (flannel-726705) Calling .Close
	I0416 17:57:21.563174   64516 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:21.563188   64516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:22.054614   64516 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-726705" context rescaled to 1 replicas
	I0416 17:57:22.092338   64516 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:22.092367   64516 main.go:141] libmachine: (flannel-726705) Calling .Close
	I0416 17:57:22.092763   64516 main.go:141] libmachine: (flannel-726705) DBG | Closing plugin on server side
	I0416 17:57:22.092809   64516 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:22.092855   64516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:22.092866   64516 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:22.092874   64516 main.go:141] libmachine: (flannel-726705) Calling .Close
	I0416 17:57:22.093112   64516 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:22.093131   64516 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:22.095786   64516 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0416 17:57:22.097161   64516 addons.go:505] duration metric: took 1.301190175s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0416 17:57:18.728184   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:18.728623   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find current IP address of domain enable-default-cni-726705 in network mk-enable-default-cni-726705
	I0416 17:57:18.728646   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | I0416 17:57:18.728583   65049 retry.go:31] will retry after 4.915219359s: waiting for machine to come up
	I0416 17:57:22.260018   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:22.759219   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:23.259775   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:23.759394   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:24.259382   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:24.759193   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:25.260144   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:25.760113   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:26.259221   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:26.759471   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:23.645327   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:23.645888   65026 main.go:141] libmachine: (enable-default-cni-726705) Found IP for machine: 192.168.61.204
	I0416 17:57:23.645915   65026 main.go:141] libmachine: (enable-default-cni-726705) Reserving static IP address...
	I0416 17:57:23.645932   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has current primary IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:23.646456   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-726705", mac: "52:54:00:24:ce:ae", ip: "192.168.61.204"} in network mk-enable-default-cni-726705
	I0416 17:57:23.722386   65026 main.go:141] libmachine: (enable-default-cni-726705) Reserved static IP address: 192.168.61.204
	I0416 17:57:23.722429   65026 main.go:141] libmachine: (enable-default-cni-726705) Waiting for SSH to be available...
	I0416 17:57:23.722438   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Getting to WaitForSSH function...
	I0416 17:57:23.725315   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:23.725748   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:minikube Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:23.725777   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:23.725953   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Using SSH client type: external
	I0416 17:57:23.725989   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/id_rsa (-rw-------)
	I0416 17:57:23.726018   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 17:57:23.726034   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | About to run SSH command:
	I0416 17:57:23.726051   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | exit 0
	I0416 17:57:23.861326   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | SSH cmd err, output: <nil>: 
	I0416 17:57:23.861640   65026 main.go:141] libmachine: (enable-default-cni-726705) KVM machine creation complete!
	I0416 17:57:23.862032   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetConfigRaw
	I0416 17:57:23.862570   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .DriverName
	I0416 17:57:23.862791   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .DriverName
	I0416 17:57:23.862949   65026 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 17:57:23.862967   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetState
	I0416 17:57:23.864470   65026 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 17:57:23.864484   65026 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 17:57:23.864490   65026 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 17:57:23.864496   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:23.866874   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:23.867297   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:23.867320   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:23.867511   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:23.867673   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:23.867807   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:23.867905   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:23.868017   65026 main.go:141] libmachine: Using SSH client type: native
	I0416 17:57:23.868240   65026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0416 17:57:23.868255   65026 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 17:57:23.985273   65026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:57:23.985299   65026 main.go:141] libmachine: Detecting the provisioner...
	I0416 17:57:23.985310   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:23.988350   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:23.988764   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:23.988790   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:23.989009   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:23.989205   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:23.989387   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:23.989538   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:23.989710   65026 main.go:141] libmachine: Using SSH client type: native
	I0416 17:57:23.989951   65026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0416 17:57:23.989969   65026 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 17:57:24.102721   65026 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 17:57:24.102785   65026 main.go:141] libmachine: found compatible host: buildroot
	I0416 17:57:24.102802   65026 main.go:141] libmachine: Provisioning with buildroot...
	I0416 17:57:24.102814   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetMachineName
	I0416 17:57:24.103101   65026 buildroot.go:166] provisioning hostname "enable-default-cni-726705"
	I0416 17:57:24.103134   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetMachineName
	I0416 17:57:24.103347   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:24.106038   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.106383   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:24.106415   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.106543   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:24.106726   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:24.106886   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:24.107118   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:24.107254   65026 main.go:141] libmachine: Using SSH client type: native
	I0416 17:57:24.107407   65026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0416 17:57:24.107420   65026 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-726705 && echo "enable-default-cni-726705" | sudo tee /etc/hostname
	I0416 17:57:24.238259   65026 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-726705
	
	I0416 17:57:24.238283   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:24.241566   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.241835   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:24.241877   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.242084   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:24.242322   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:24.242491   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:24.242650   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:24.242812   65026 main.go:141] libmachine: Using SSH client type: native
	I0416 17:57:24.243043   65026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0416 17:57:24.243070   65026 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-726705' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-726705/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-726705' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:57:24.370513   65026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:57:24.370546   65026 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:57:24.370569   65026 buildroot.go:174] setting up certificates
	I0416 17:57:24.370581   65026 provision.go:84] configureAuth start
	I0416 17:57:24.370592   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetMachineName
	I0416 17:57:24.370888   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetIP
	I0416 17:57:24.374234   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.374637   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:24.374662   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.374862   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:24.377798   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.378215   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:24.378245   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.378369   65026 provision.go:143] copyHostCerts
	I0416 17:57:24.378433   65026 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:57:24.378456   65026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:57:24.378527   65026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:57:24.378642   65026 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:57:24.378654   65026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:57:24.378692   65026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:57:24.378771   65026 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:57:24.378783   65026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:57:24.378821   65026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:57:24.378915   65026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-726705 san=[127.0.0.1 192.168.61.204 enable-default-cni-726705 localhost minikube]
	I0416 17:57:24.517531   65026 provision.go:177] copyRemoteCerts
	I0416 17:57:24.517596   65026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:57:24.517626   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:24.520706   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.521140   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:24.521167   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.521413   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:24.521602   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:24.521792   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:24.521930   65026 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/id_rsa Username:docker}
	I0416 17:57:24.618024   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:57:24.652658   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0416 17:57:24.684078   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 17:57:24.719015   65026 provision.go:87] duration metric: took 348.422028ms to configureAuth
	I0416 17:57:24.719044   65026 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:57:24.719254   65026 config.go:182] Loaded profile config "enable-default-cni-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:57:24.719334   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:24.722492   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.722816   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:24.722848   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:24.723066   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:24.723293   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:24.723478   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:24.723670   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:24.723880   65026 main.go:141] libmachine: Using SSH client type: native
	I0416 17:57:24.724094   65026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0416 17:57:24.724118   65026 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:57:25.070985   65026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 17:57:25.071013   65026 main.go:141] libmachine: Checking connection to Docker...
	I0416 17:57:25.071026   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetURL
	I0416 17:57:25.072464   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | Using libvirt version 6000000
	I0416 17:57:25.075048   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.075480   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:25.075515   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.075707   65026 main.go:141] libmachine: Docker is up and running!
	I0416 17:57:25.075723   65026 main.go:141] libmachine: Reticulating splines...
	I0416 17:57:25.075732   65026 client.go:171] duration metric: took 26.765260222s to LocalClient.Create
	I0416 17:57:25.075757   65026 start.go:167] duration metric: took 26.765328866s to libmachine.API.Create "enable-default-cni-726705"
	I0416 17:57:25.075770   65026 start.go:293] postStartSetup for "enable-default-cni-726705" (driver="kvm2")
	I0416 17:57:25.075788   65026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:57:25.075820   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .DriverName
	I0416 17:57:25.076077   65026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:57:25.076120   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:25.078479   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.078850   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:25.078885   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.079019   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:25.079207   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:25.079360   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:25.079641   65026 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/id_rsa Username:docker}
	I0416 17:57:25.175975   65026 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:57:25.181466   65026 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:57:25.181490   65026 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:57:25.181562   65026 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:57:25.181666   65026 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:57:25.181780   65026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:57:25.193085   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:57:25.228744   65026 start.go:296] duration metric: took 152.955528ms for postStartSetup
	I0416 17:57:25.228803   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetConfigRaw
	I0416 17:57:25.229500   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetIP
	I0416 17:57:25.232518   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.232927   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:25.232975   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.233205   65026 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/config.json ...
	I0416 17:57:25.233427   65026 start.go:128] duration metric: took 26.94141598s to createHost
	I0416 17:57:25.233460   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:25.235881   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.236313   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:25.236342   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.236528   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:25.236699   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:25.236891   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:25.237111   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:25.237333   65026 main.go:141] libmachine: Using SSH client type: native
	I0416 17:57:25.237526   65026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0416 17:57:25.237547   65026 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 17:57:25.358704   65026 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290245.342266708
	
	I0416 17:57:25.358728   65026 fix.go:216] guest clock: 1713290245.342266708
	I0416 17:57:25.358740   65026 fix.go:229] Guest: 2024-04-16 17:57:25.342266708 +0000 UTC Remote: 2024-04-16 17:57:25.2334453 +0000 UTC m=+27.065769552 (delta=108.821408ms)
	I0416 17:57:25.358765   65026 fix.go:200] guest clock delta is within tolerance: 108.821408ms
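The clock-skew check above parses the guest's `date +%s.%N` output and compares it against the host timestamp. A minimal Go sketch of that comparison using the two timestamps from the log; the 2s tolerance is an assumption, not minikube's actual setting:

```go
package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest clock
// (parsed from `date +%s.%N` over SSH) and the host-side timestamp.
func clockDelta(guest, host time.Time) time.Duration {
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	guest := time.Unix(1713290245, 342266708)                       // guest clock from the SSH output above
	host := time.Date(2024, 4, 16, 17, 57, 25, 233445300, time.UTC) // "Remote" timestamp from the log
	tolerance := 2 * time.Second                                    // placeholder tolerance
	d := clockDelta(guest, host)
	fmt.Printf("guest clock delta is %v (within tolerance: %v)\n", d, d <= tolerance)
}
```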
	I0416 17:57:25.358771   65026 start.go:83] releasing machines lock for "enable-default-cni-726705", held for 27.067159768s
	I0416 17:57:25.358796   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .DriverName
	I0416 17:57:25.359056   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetIP
	I0416 17:57:25.362359   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.362779   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:25.362807   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.363017   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .DriverName
	I0416 17:57:25.363595   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .DriverName
	I0416 17:57:25.363813   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .DriverName
	I0416 17:57:25.363917   65026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:57:25.363964   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:25.364122   65026 ssh_runner.go:195] Run: cat /version.json
	I0416 17:57:25.364146   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHHostname
	I0416 17:57:25.367491   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.367869   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:25.367912   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.368283   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:25.368312   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.368494   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:25.368602   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:25.368623   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:25.368664   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:25.368832   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHPort
	I0416 17:57:25.368883   65026 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/id_rsa Username:docker}
	I0416 17:57:25.368981   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHKeyPath
	I0416 17:57:25.369116   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetSSHUsername
	I0416 17:57:25.369281   65026 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/enable-default-cni-726705/id_rsa Username:docker}
	I0416 17:57:25.471334   65026 ssh_runner.go:195] Run: systemctl --version
	I0416 17:57:25.478161   65026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:57:25.654369   65026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:57:25.661361   65026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:57:25.661428   65026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:57:25.682746   65026 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:57:25.682769   65026 start.go:494] detecting cgroup driver to use...
	I0416 17:57:25.682833   65026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:57:25.706439   65026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:57:25.724035   65026 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:57:25.724099   65026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:57:25.742980   65026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:57:25.760219   65026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:57:25.922966   65026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:57:26.089778   65026 docker.go:233] disabling docker service ...
	I0416 17:57:26.089860   65026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:57:26.108349   65026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:57:26.128060   65026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:57:26.303799   65026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:57:26.456072   65026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:57:26.472805   65026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:57:26.496489   65026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:57:26.496559   65026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:57:26.511392   65026 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:57:26.511466   65026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:57:26.527077   65026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:57:26.543239   65026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:57:26.567004   65026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:57:26.581148   65026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:57:26.597884   65026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:57:26.628051   65026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:57:26.640699   65026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:57:26.652116   65026 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 17:57:26.652175   65026 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 17:57:26.667835   65026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 17:57:26.684306   65026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:26.848227   65026 ssh_runner.go:195] Run: sudo systemctl restart crio
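The sed/systemctl sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager) and then restarts CRI-O. A condensed sketch of those edits, run locally through `sh -c` rather than over minikube's ssh_runner; the file path and values come from the log, the rest is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyCrioConfig mirrors the sed edits logged above: point CRI-O at the pause
// image, switch its cgroup manager, then reload units and restart the service.
func applyCrioConfig(pauseImage, cgroupManager string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupManager),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v\n%s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := applyCrioConfig("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println("error:", err)
	}
}
```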
	I0416 17:57:27.047950   65026 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:57:27.048034   65026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:57:27.054697   65026 start.go:562] Will wait 60s for crictl version
	I0416 17:57:27.054760   65026 ssh_runner.go:195] Run: which crictl
	I0416 17:57:27.059882   65026 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:57:27.106551   65026 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
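The "Will wait 60s for socket path" step above amounts to polling stat on /var/run/crio/crio.sock until the restarted runtime creates it. A small sketch of that wait loop; the 500ms poll interval is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path such as /var/run/crio/crio.sock
// until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```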
	I0416 17:57:27.106636   65026 ssh_runner.go:195] Run: crio --version
	I0416 17:57:27.143604   65026 ssh_runner.go:195] Run: crio --version
	I0416 17:57:27.179658   65026 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 17:57:23.124768   64516 node_ready.go:53] node "flannel-726705" has status "Ready":"False"
	I0416 17:57:25.622970   64516 node_ready.go:53] node "flannel-726705" has status "Ready":"False"
	I0416 17:57:27.635913   64516 node_ready.go:49] node "flannel-726705" has status "Ready":"True"
	I0416 17:57:27.635937   64516 node_ready.go:38] duration metric: took 6.517173437s for node "flannel-726705" to be "Ready" ...
	I0416 17:57:27.635948   64516 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:27.649901   64516 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-ttf84" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:27.180874   65026 main.go:141] libmachine: (enable-default-cni-726705) Calling .GetIP
	I0416 17:57:27.183960   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:27.184335   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:ce:ae", ip: ""} in network mk-enable-default-cni-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:57:15 +0000 UTC Type:0 Mac:52:54:00:24:ce:ae Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:enable-default-cni-726705 Clientid:01:52:54:00:24:ce:ae}
	I0416 17:57:27.184370   65026 main.go:141] libmachine: (enable-default-cni-726705) DBG | domain enable-default-cni-726705 has defined IP address 192.168.61.204 and MAC address 52:54:00:24:ce:ae in network mk-enable-default-cni-726705
	I0416 17:57:27.184571   65026 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0416 17:57:27.189439   65026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
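The one-liner above strips any stale `host.minikube.internal` line from /etc/hosts and appends a fresh one. The same idempotent update written directly in Go (must run as root; error handling kept minimal):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing "<name>" line from /etc/hosts and
// appends a fresh "<ip>\t<name>" entry, matching the shell one-liner above.
func ensureHostsEntry(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the old entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```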
	I0416 17:57:27.204437   65026 kubeadm.go:877] updating cluster {Name:enable-default-cni-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:enable-default-cni-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.204 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:57:27.204579   65026 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:57:27.204647   65026 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:57:27.245792   65026 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 17:57:27.245875   65026 ssh_runner.go:195] Run: which lz4
	I0416 17:57:27.250705   65026 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 17:57:27.255820   65026 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:57:27.255856   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 17:57:27.259949   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:27.759901   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.259738   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:28.759769   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.259347   59445 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:57:29.463022   59445 kubeadm.go:1107] duration metric: took 13.03804813s to wait for elevateKubeSystemPrivileges
	W0416 17:57:29.463067   59445 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 17:57:29.463077   59445 kubeadm.go:393] duration metric: took 5m20.02490138s to StartCluster
	I0416 17:57:29.463097   59445 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:29.463188   59445 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:57:29.464676   59445 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:29.464972   59445 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:57:29.466819   59445 out.go:177] * Verifying Kubernetes components...
	I0416 17:57:29.465232   59445 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:57:29.465254   59445 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:57:29.468726   59445 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-304316"
	I0416 17:57:29.468759   59445 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-304316"
	W0416 17:57:29.468771   59445 addons.go:243] addon storage-provisioner should already be in state true
	I0416 17:57:29.468794   59445 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-304316"
	I0416 17:57:29.468815   59445 host.go:66] Checking if "default-k8s-diff-port-304316" exists ...
	I0416 17:57:29.468826   59445 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-304316"
	I0416 17:57:29.468904   59445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:29.468995   59445 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-304316"
	I0416 17:57:29.469023   59445 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-304316"
	W0416 17:57:29.469031   59445 addons.go:243] addon metrics-server should already be in state true
	I0416 17:57:29.469057   59445 host.go:66] Checking if "default-k8s-diff-port-304316" exists ...
	I0416 17:57:29.469257   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:29.469262   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:29.469277   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:29.469277   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:29.469499   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:29.469526   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:29.489990   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38105
	I0416 17:57:29.490503   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0416 17:57:29.490698   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:29.491215   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:57:29.491243   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:29.491334   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:29.491930   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:57:29.491946   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:29.492009   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:29.492613   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:29.492639   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:29.492914   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:29.493140   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:57:29.496929   59445 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-304316"
	W0416 17:57:29.496948   59445 addons.go:243] addon default-storageclass should already be in state true
	I0416 17:57:29.496976   59445 host.go:66] Checking if "default-k8s-diff-port-304316" exists ...
	I0416 17:57:29.497337   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:29.497369   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:29.499481   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0416 17:57:29.499954   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:29.500431   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:57:29.500446   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:29.500823   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:29.501613   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:29.501654   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:29.511958   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0416 17:57:29.512493   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:29.513089   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:57:29.513106   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:29.513482   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:29.513678   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:57:29.515556   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:57:29.517608   59445 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:57:29.519150   59445 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:29.519168   59445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:57:29.519185   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:57:29.517099   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0416 17:57:29.520122   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:29.520772   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:57:29.520790   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:29.521250   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:29.521906   59445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:57:29.521932   59445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:57:29.522518   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:57:29.527198   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0416 17:57:29.527247   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:57:29.527337   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:57:29.527357   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:57:29.527475   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:57:29.527660   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:29.527720   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:57:29.527905   59445 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa Username:docker}
	I0416 17:57:29.528309   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:57:29.528334   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:29.528688   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:29.528869   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:57:29.530559   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:57:29.532701   59445 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 17:57:29.534032   59445 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 17:57:29.534050   59445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 17:57:29.534070   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:57:29.536830   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:57:29.537323   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:57:29.537361   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:57:29.537551   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:57:29.537735   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:57:29.537888   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:57:29.538025   59445 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa Username:docker}
	I0416 17:57:29.543423   59445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0416 17:57:29.543826   59445 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:57:29.544415   59445 main.go:141] libmachine: Using API Version  1
	I0416 17:57:29.544438   59445 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:57:29.544789   59445 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:57:29.545000   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetState
	I0416 17:57:29.546750   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .DriverName
	I0416 17:57:29.549187   59445 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:29.549208   59445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:57:29.549229   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHHostname
	I0416 17:57:29.552407   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:57:29.552912   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:a7:9f", ip: ""} in network mk-default-k8s-diff-port-304316: {Iface:virbr1 ExpiryTime:2024-04-16 18:51:51 +0000 UTC Type:0 Mac:52:54:00:c6:a7:9f Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:default-k8s-diff-port-304316 Clientid:01:52:54:00:c6:a7:9f}
	I0416 17:57:29.552942   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | domain default-k8s-diff-port-304316 has defined IP address 192.168.39.6 and MAC address 52:54:00:c6:a7:9f in network mk-default-k8s-diff-port-304316
	I0416 17:57:29.553099   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHPort
	I0416 17:57:29.557047   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHKeyPath
	I0416 17:57:29.557248   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .GetSSHUsername
	I0416 17:57:29.557499   59445 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/default-k8s-diff-port-304316/id_rsa Username:docker}
	I0416 17:57:29.682930   59445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:29.712324   59445 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-304316" to be "Ready" ...
	I0416 17:57:29.725916   59445 node_ready.go:49] node "default-k8s-diff-port-304316" has status "Ready":"True"
	I0416 17:57:29.725944   59445 node_ready.go:38] duration metric: took 13.585959ms for node "default-k8s-diff-port-304316" to be "Ready" ...
	I0416 17:57:29.725956   59445 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:29.741865   59445 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2td7t" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:29.780548   59445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:57:29.830969   59445 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 17:57:29.831001   59445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 17:57:29.844211   59445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:57:29.898404   59445 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 17:57:29.898431   59445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 17:57:30.030334   59445 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 17:57:30.030412   59445 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 17:57:30.082450   59445 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 17:57:30.570044   59445 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:30.570079   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Close
	I0416 17:57:30.570055   59445 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:30.570158   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Close
	I0416 17:57:30.570502   59445 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:30.570523   59445 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:30.570549   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | Closing plugin on server side
	I0416 17:57:30.570555   59445 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:30.570569   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Close
	I0416 17:57:30.570702   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | Closing plugin on server side
	I0416 17:57:30.570733   59445 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:30.570791   59445 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:30.570871   59445 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:30.570896   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | Closing plugin on server side
	I0416 17:57:30.570902   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Close
	I0416 17:57:30.570966   59445 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:30.570989   59445 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:30.571185   59445 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:30.571197   59445 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:30.594731   59445 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:30.594821   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Close
	I0416 17:57:30.595255   59445 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:30.595276   59445 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:30.595278   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | Closing plugin on server side
	I0416 17:57:30.955738   59445 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:30.955768   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Close
	I0416 17:57:30.957794   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) DBG | Closing plugin on server side
	I0416 17:57:30.957809   59445 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:30.957886   59445 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:30.957904   59445 main.go:141] libmachine: Making call to close driver server
	I0416 17:57:30.957919   59445 main.go:141] libmachine: (default-k8s-diff-port-304316) Calling .Close
	I0416 17:57:30.958265   59445 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:57:30.958283   59445 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:57:30.958296   59445 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-304316"
	I0416 17:57:30.959775   59445 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0416 17:57:30.960899   59445 addons.go:505] duration metric: took 1.495647462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
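Enabling an addon, as logged above, amounts to copying its manifests into /etc/kubernetes/addons and applying them with the in-VM kubectl against /var/lib/minikube/kubeconfig. A sketch of the apply step (paths and binary version taken from the log; intended to run inside the guest):

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests runs the node's bundled kubectl against its kubeconfig,
// mirroring the `sudo KUBECONFIG=... kubectl apply -f ...` commands above.
func applyAddonManifests(manifests ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.29.3/kubectl",
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests(
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```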
	I0416 17:57:31.780165   59445 pod_ready.go:102] pod "coredns-76f75df574-2td7t" in "kube-system" namespace has status "Ready":"False"
	I0416 17:57:29.661367   64516 pod_ready.go:102] pod "coredns-76f75df574-ttf84" in "kube-system" namespace has status "Ready":"False"
	I0416 17:57:32.160052   64516 pod_ready.go:102] pod "coredns-76f75df574-ttf84" in "kube-system" namespace has status "Ready":"False"
	I0416 17:57:29.209969   65026 crio.go:462] duration metric: took 1.959296697s to copy over tarball
	I0416 17:57:29.210053   65026 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 17:57:32.297443   65026 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.087355903s)
	I0416 17:57:32.297472   65026 crio.go:469] duration metric: took 3.087468749s to extract the tarball
	I0416 17:57:32.297480   65026 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 17:57:32.340229   65026 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:57:32.395132   65026 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 17:57:32.395156   65026 cache_images.go:84] Images are preloaded, skipping loading
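The preload path above is: check whether /preloaded.tar.lz4 exists on the guest, scp it over if not, unpack it into /var with xattrs preserved, delete the tarball, and re-list images with crictl. A local sketch of the unpack step, reusing the same tar flags as the log (the scp part is omitted):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var with the same
// tar invocation as the log above, then removes the tarball.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	out, err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
```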
	I0416 17:57:32.395166   65026 kubeadm.go:928] updating node { 192.168.61.204 8443 v1.29.3 crio true true} ...
	I0416 17:57:32.395286   65026 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-726705 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0416 17:57:32.395370   65026 ssh_runner.go:195] Run: crio config
	I0416 17:57:32.461682   65026 cni.go:84] Creating CNI manager for "bridge"
	I0416 17:57:32.461705   65026 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 17:57:32.461724   65026 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.204 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-726705 NodeName:enable-default-cni-726705 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 17:57:32.461871   65026 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-726705"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 17:57:32.461935   65026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 17:57:32.476292   65026 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 17:57:32.476366   65026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 17:57:32.488135   65026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0416 17:57:32.509109   65026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 17:57:32.529130   65026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0416 17:57:32.553923   65026 ssh_runner.go:195] Run: grep 192.168.61.204	control-plane.minikube.internal$ /etc/hosts
	I0416 17:57:32.559004   65026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 17:57:32.578357   65026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:57:32.730488   65026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:57:32.751349   65026 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705 for IP: 192.168.61.204
	I0416 17:57:32.751371   65026 certs.go:194] generating shared ca certs ...
	I0416 17:57:32.751390   65026 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:32.751589   65026 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 17:57:32.751647   65026 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 17:57:32.751660   65026 certs.go:256] generating profile certs ...
	I0416 17:57:32.751731   65026 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.key
	I0416 17:57:32.751748   65026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt with IP's: []
	I0416 17:57:32.926406   65026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt ...
	I0416 17:57:32.926436   65026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: {Name:mk48274912509a36b6671d0d0397a079b4f99afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:32.926602   65026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.key ...
	I0416 17:57:32.926615   65026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.key: {Name:mk3a2587772b3a0a14cdd8dd1bb42b862b7a5117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:32.926697   65026 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.key.c0d02b1e
	I0416 17:57:32.926713   65026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.crt.c0d02b1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.204]
	I0416 17:57:33.046488   65026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.crt.c0d02b1e ...
	I0416 17:57:33.046517   65026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.crt.c0d02b1e: {Name:mkc52857901deaee56f27d669b79a763f2130255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:33.046686   65026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.key.c0d02b1e ...
	I0416 17:57:33.046699   65026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.key.c0d02b1e: {Name:mk37c535ce022e7a051efb4162d7484955d16f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:33.046775   65026 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.crt.c0d02b1e -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.crt
	I0416 17:57:33.046858   65026 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.key.c0d02b1e -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.key
	I0416 17:57:33.046913   65026 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/proxy-client.key
	I0416 17:57:33.046928   65026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/proxy-client.crt with IP's: []
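certs.go above generates profile certificates carrying the listed IP SANs and signs them with the shared minikubeCA. A compressed crypto/x509 sketch of issuing such a cert; it self-signs instead of using a CA, purely to keep the example short, and the lifetime is a placeholder:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// generateCert issues a self-signed certificate with the IP SANs seen in the
// log above and writes it as PEM to stdout.
func generateCert(ips []net.IP) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // placeholder lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return err
	}
	return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

func main() {
	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.204")}
	if err := generateCert(ips); err != nil {
		fmt.Println(err)
	}
}
```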
	I0416 17:57:33.870731   59445 pod_ready.go:102] pod "coredns-76f75df574-2td7t" in "kube-system" namespace has status "Ready":"False"
	I0416 17:57:34.251234   59445 pod_ready.go:92] pod "coredns-76f75df574-2td7t" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:34.251256   59445 pod_ready.go:81] duration metric: took 4.509363708s for pod "coredns-76f75df574-2td7t" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.251264   59445 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-v6dwd" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.257627   59445 pod_ready.go:92] pod "coredns-76f75df574-v6dwd" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:34.257651   59445 pod_ready.go:81] duration metric: took 6.38064ms for pod "coredns-76f75df574-v6dwd" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.257659   59445 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.268066   59445 pod_ready.go:92] pod "etcd-default-k8s-diff-port-304316" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:34.268093   59445 pod_ready.go:81] duration metric: took 10.424755ms for pod "etcd-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.268104   59445 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.278876   59445 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-304316" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:34.278898   59445 pod_ready.go:81] duration metric: took 10.786137ms for pod "kube-apiserver-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.278911   59445 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.290172   59445 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-304316" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:34.290193   59445 pod_ready.go:81] duration metric: took 11.274021ms for pod "kube-controller-manager-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.290203   59445 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lg46q" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.691137   59445 pod_ready.go:92] pod "kube-proxy-lg46q" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:34.691179   59445 pod_ready.go:81] duration metric: took 400.962696ms for pod "kube-proxy-lg46q" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:34.691193   59445 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:35.090829   59445 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-304316" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:35.090855   59445 pod_ready.go:81] duration metric: took 399.653274ms for pod "kube-scheduler-default-k8s-diff-port-304316" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:35.090866   59445 pod_ready.go:38] duration metric: took 5.36489898s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
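The pod_ready.go helpers above watch each system-critical pod's Ready condition through the API. An equivalent poll via kubectl's jsonpath output; kubeconfig/context flags are omitted and the pod name and interval are just examples:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls kubectl for a pod's Ready condition until it is "True"
// or the deadline passes.
func waitPodReady(namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name, jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %v", namespace, name, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitPodReady("kube-system", "coredns-76f75df574-2td7t", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```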
	I0416 17:57:35.090885   59445 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:57:35.090946   59445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:57:35.109770   59445 api_server.go:72] duration metric: took 5.644764318s to wait for apiserver process to appear ...
	I0416 17:57:35.109797   59445 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:57:35.109824   59445 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8444/healthz ...
	I0416 17:57:35.114947   59445 api_server.go:279] https://192.168.39.6:8444/healthz returned 200:
	ok
	I0416 17:57:35.116392   59445 api_server.go:141] control plane version: v1.29.3
	I0416 17:57:35.116415   59445 api_server.go:131] duration metric: took 6.6108ms to wait for apiserver health ...
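The healthz wait above issues GETs against https://192.168.39.6:8444/healthz until it returns 200 "ok". A sketch of that loop; it skips TLS verification for brevity, whereas minikube authenticates with the cluster's client certificates:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes.
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	_ = checkHealthz("https://192.168.39.6:8444/healthz", 60*time.Second)
}
```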
	I0416 17:57:35.116424   59445 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:57:35.295358   59445 system_pods.go:59] 9 kube-system pods found
	I0416 17:57:35.295392   59445 system_pods.go:61] "coredns-76f75df574-2td7t" [01c407e1-20b0-4554-924e-08e9c1a6e71e] Running
	I0416 17:57:35.295399   59445 system_pods.go:61] "coredns-76f75df574-v6dwd" [c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094] Running
	I0416 17:57:35.295404   59445 system_pods.go:61] "etcd-default-k8s-diff-port-304316" [78e21a83-5970-4b27-9a81-4733cfbdb10d] Running
	I0416 17:57:35.295410   59445 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-304316" [2e746031-42ca-40ef-b4c0-e5dc87dd9592] Running
	I0416 17:57:35.295415   59445 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-304316" [7c988c09-ab3a-46e2-a12a-fe08d22f00c1] Running
	I0416 17:57:35.295419   59445 system_pods.go:61] "kube-proxy-lg46q" [8b3c5c13-25ef-4b45-854d-696e53410d7a] Running
	I0416 17:57:35.295422   59445 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-304316" [f59c7a42-3181-4202-8b3f-5539177d4449] Running
	I0416 17:57:35.295428   59445 system_pods.go:61] "metrics-server-57f55c9bc5-qv9w5" [07c1a75f-66de-4672-90ef-a5d837dc6632] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 17:57:35.295434   59445 system_pods.go:61] "storage-provisioner" [84e316ce-7709-4328-b30a-763f622a525c] Running
	I0416 17:57:35.295446   59445 system_pods.go:74] duration metric: took 179.014062ms to wait for pod list to return data ...
	I0416 17:57:35.295455   59445 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:57:35.489964   59445 default_sa.go:45] found service account: "default"
	I0416 17:57:35.489990   59445 default_sa.go:55] duration metric: took 194.528133ms for default service account to be created ...
	I0416 17:57:35.490000   59445 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:57:35.695680   59445 system_pods.go:86] 9 kube-system pods found
	I0416 17:57:35.695716   59445 system_pods.go:89] "coredns-76f75df574-2td7t" [01c407e1-20b0-4554-924e-08e9c1a6e71e] Running
	I0416 17:57:35.695724   59445 system_pods.go:89] "coredns-76f75df574-v6dwd" [c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094] Running
	I0416 17:57:35.695730   59445 system_pods.go:89] "etcd-default-k8s-diff-port-304316" [78e21a83-5970-4b27-9a81-4733cfbdb10d] Running
	I0416 17:57:35.695736   59445 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-304316" [2e746031-42ca-40ef-b4c0-e5dc87dd9592] Running
	I0416 17:57:35.695742   59445 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-304316" [7c988c09-ab3a-46e2-a12a-fe08d22f00c1] Running
	I0416 17:57:35.695747   59445 system_pods.go:89] "kube-proxy-lg46q" [8b3c5c13-25ef-4b45-854d-696e53410d7a] Running
	I0416 17:57:35.695753   59445 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-304316" [f59c7a42-3181-4202-8b3f-5539177d4449] Running
	I0416 17:57:35.695762   59445 system_pods.go:89] "metrics-server-57f55c9bc5-qv9w5" [07c1a75f-66de-4672-90ef-a5d837dc6632] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 17:57:35.695768   59445 system_pods.go:89] "storage-provisioner" [84e316ce-7709-4328-b30a-763f622a525c] Running
	I0416 17:57:35.695781   59445 system_pods.go:126] duration metric: took 205.774511ms to wait for k8s-apps to be running ...
	I0416 17:57:35.695791   59445 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:57:35.695839   59445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:57:35.717371   59445 system_svc.go:56] duration metric: took 21.569662ms WaitForService to wait for kubelet
	I0416 17:57:35.717410   59445 kubeadm.go:576] duration metric: took 6.252408841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:57:35.717437   59445 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:57:35.890588   59445 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:57:35.890608   59445 node_conditions.go:123] node cpu capacity is 2
	I0416 17:57:35.890617   59445 node_conditions.go:105] duration metric: took 173.175745ms to run NodePressure ...
	I0416 17:57:35.890627   59445 start.go:240] waiting for startup goroutines ...
	I0416 17:57:35.890634   59445 start.go:245] waiting for cluster config update ...
	I0416 17:57:35.890644   59445 start.go:254] writing updated cluster config ...
	I0416 17:57:35.890871   59445 ssh_runner.go:195] Run: rm -f paused
	I0416 17:57:35.942310   59445 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 17:57:35.944983   59445 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-304316" cluster and "default" namespace by default
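	[editor's note] The NodePressure step a few lines above reports ephemeral-storage and cpu capacity for the node. A small client-go sketch of the same inspection follows; it assumes a kubeconfig at the default path and simply flags any non-Ready condition that is currently True.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printNodeConditions reports the data the NodePressure step logs:
// ephemeral-storage and cpu capacity, plus any pressure condition that is True.
func printNodeConditions(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			if c.Status == corev1.ConditionTrue && c.Type != corev1.NodeReady {
				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
			}
		}
	}
	return nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println(printNodeConditions(kubernetes.NewForConfigOrDie(config)))
}
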
	I0416 17:57:34.659444   64516 pod_ready.go:102] pod "coredns-76f75df574-ttf84" in "kube-system" namespace has status "Ready":"False"
	I0416 17:57:37.157105   64516 pod_ready.go:102] pod "coredns-76f75df574-ttf84" in "kube-system" namespace has status "Ready":"False"
	I0416 17:57:33.462095   65026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/proxy-client.crt ...
	I0416 17:57:33.462124   65026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/proxy-client.crt: {Name:mk2b74a34add404b754bc96756df887a68830420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:33.462311   65026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/proxy-client.key ...
	I0416 17:57:33.462328   65026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/proxy-client.key: {Name:mk82d70153b06fe6595eb6e1cd6bb890d515c2b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:57:33.462527   65026 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 17:57:33.462567   65026 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 17:57:33.462578   65026 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 17:57:33.462602   65026 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 17:57:33.462628   65026 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 17:57:33.462653   65026 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 17:57:33.462690   65026 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:57:33.463370   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 17:57:33.514704   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 17:57:33.560139   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 17:57:33.590019   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 17:57:33.618550   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0416 17:57:33.647140   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 17:57:33.676682   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 17:57:33.707844   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 17:57:33.737317   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 17:57:33.765427   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 17:57:33.794478   65026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 17:57:33.824908   65026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 17:57:33.845434   65026 ssh_runner.go:195] Run: openssl version
	I0416 17:57:33.852590   65026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 17:57:33.866964   65026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:33.873988   65026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:33.874067   65026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 17:57:33.882851   65026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 17:57:33.896637   65026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 17:57:33.911783   65026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 17:57:33.918224   65026 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 17:57:33.918292   65026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 17:57:33.924891   65026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 17:57:33.939530   65026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 17:57:33.953355   65026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 17:57:33.959653   65026 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 17:57:33.959715   65026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 17:57:33.966726   65026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
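	[editor's note] The sequence above installs each PEM into the guest trust store: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0. The sketch below reproduces that pattern locally (it needs openssl on PATH and root to write /etc/ssl/certs); the path in main is taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the trust-store step above: compute the OpenSSL
// subject hash of a CA certificate and link it into /etc/ssl/certs as
// <hash>.0 so system TLS clients pick it up.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: replace any existing link.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
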
	I0416 17:57:33.981502   65026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 17:57:33.986435   65026 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 17:57:33.986488   65026 kubeadm.go:391] StartCluster: {Name:enable-default-cni-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.29.3 ClusterName:enable-default-cni-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.204 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:57:33.986582   65026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 17:57:33.986636   65026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 17:57:34.034061   65026 cri.go:89] found id: ""
	I0416 17:57:34.034141   65026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 17:57:34.046496   65026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 17:57:34.060449   65026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 17:57:34.073415   65026 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 17:57:34.073444   65026 kubeadm.go:156] found existing configuration files:
	
	I0416 17:57:34.073501   65026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 17:57:34.085804   65026 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 17:57:34.085882   65026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 17:57:34.098380   65026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 17:57:34.109876   65026 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 17:57:34.109960   65026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 17:57:34.122002   65026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 17:57:34.133973   65026 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 17:57:34.134034   65026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 17:57:34.146349   65026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 17:57:34.161228   65026 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 17:57:34.161289   65026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
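	[editor's note] The grep-then-rm loop above drops any existing kubeadm config file that does not mention the expected control-plane endpoint before running kubeadm init. A compact sketch of the same cleanup follows; it assumes direct filesystem access rather than the ssh_runner the log shows, and treats a missing file the same as a stale one, matching the non-zero grep exits above.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeadm config file that does not
// reference the expected control-plane endpoint.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // file exists and already points at the right endpoint
		}
		os.Remove(f) // rm -f: ignore the error if the file is already gone
		fmt.Printf("removed stale or missing config %s\n", f)
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
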
	I0416 17:57:34.172032   65026 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 17:57:34.410393   65026 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 17:57:39.158661   64516 pod_ready.go:102] pod "coredns-76f75df574-ttf84" in "kube-system" namespace has status "Ready":"False"
	I0416 17:57:41.661133   64516 pod_ready.go:102] pod "coredns-76f75df574-ttf84" in "kube-system" namespace has status "Ready":"False"
	I0416 17:57:43.658034   64516 pod_ready.go:92] pod "coredns-76f75df574-ttf84" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:43.658056   64516 pod_ready.go:81] duration metric: took 16.008124609s for pod "coredns-76f75df574-ttf84" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.658064   64516 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.663916   64516 pod_ready.go:92] pod "etcd-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:43.663942   64516 pod_ready.go:81] duration metric: took 5.870265ms for pod "etcd-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.663954   64516 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.672944   64516 pod_ready.go:92] pod "kube-apiserver-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:43.672965   64516 pod_ready.go:81] duration metric: took 9.003601ms for pod "kube-apiserver-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.672975   64516 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.679074   64516 pod_ready.go:92] pod "kube-controller-manager-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:43.679102   64516 pod_ready.go:81] duration metric: took 6.118859ms for pod "kube-controller-manager-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.679114   64516 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-nndbn" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.685192   64516 pod_ready.go:92] pod "kube-proxy-nndbn" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:43.685219   64516 pod_ready.go:81] duration metric: took 6.096882ms for pod "kube-proxy-nndbn" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:43.685231   64516 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:44.054198   64516 pod_ready.go:92] pod "kube-scheduler-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:57:44.054224   64516 pod_ready.go:81] duration metric: took 368.98472ms for pod "kube-scheduler-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:57:44.054234   64516 pod_ready.go:38] duration metric: took 16.418261802s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:57:44.054247   64516 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:57:44.054300   64516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:57:44.075191   64516 api_server.go:72] duration metric: took 23.279250068s to wait for apiserver process to appear ...
	I0416 17:57:44.075224   64516 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:57:44.075247   64516 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0416 17:57:44.081519   64516 api_server.go:279] https://192.168.50.192:8443/healthz returned 200:
	ok
	I0416 17:57:44.082961   64516 api_server.go:141] control plane version: v1.29.3
	I0416 17:57:44.082982   64516 api_server.go:131] duration metric: took 7.750846ms to wait for apiserver health ...
	I0416 17:57:44.082989   64516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:57:44.259702   64516 system_pods.go:59] 7 kube-system pods found
	I0416 17:57:44.259732   64516 system_pods.go:61] "coredns-76f75df574-ttf84" [0754e643-7687-4063-a575-1ee156584720] Running
	I0416 17:57:44.259736   64516 system_pods.go:61] "etcd-flannel-726705" [2bb52199-2be6-4b9d-81ba-5abc853427cb] Running
	I0416 17:57:44.259740   64516 system_pods.go:61] "kube-apiserver-flannel-726705" [c841987f-1227-4919-940c-fda2382d022d] Running
	I0416 17:57:44.259744   64516 system_pods.go:61] "kube-controller-manager-flannel-726705" [1b18a871-f2d3-45f0-9108-99d9ebb34b52] Running
	I0416 17:57:44.259746   64516 system_pods.go:61] "kube-proxy-nndbn" [cbbb14ee-0059-4818-b1be-70c77aa5d03d] Running
	I0416 17:57:44.259749   64516 system_pods.go:61] "kube-scheduler-flannel-726705" [a0ec179a-d440-4dae-a727-f1785445abd7] Running
	I0416 17:57:44.259752   64516 system_pods.go:61] "storage-provisioner" [568b5d91-2a3b-4b15-92d5-76d8c57875ae] Running
	I0416 17:57:44.259757   64516 system_pods.go:74] duration metric: took 176.763076ms to wait for pod list to return data ...
	I0416 17:57:44.259765   64516 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:57:44.454691   64516 default_sa.go:45] found service account: "default"
	I0416 17:57:44.454720   64516 default_sa.go:55] duration metric: took 194.949151ms for default service account to be created ...
	I0416 17:57:44.454729   64516 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:57:44.663055   64516 system_pods.go:86] 7 kube-system pods found
	I0416 17:57:44.663082   64516 system_pods.go:89] "coredns-76f75df574-ttf84" [0754e643-7687-4063-a575-1ee156584720] Running
	I0416 17:57:44.663088   64516 system_pods.go:89] "etcd-flannel-726705" [2bb52199-2be6-4b9d-81ba-5abc853427cb] Running
	I0416 17:57:44.663092   64516 system_pods.go:89] "kube-apiserver-flannel-726705" [c841987f-1227-4919-940c-fda2382d022d] Running
	I0416 17:57:44.663096   64516 system_pods.go:89] "kube-controller-manager-flannel-726705" [1b18a871-f2d3-45f0-9108-99d9ebb34b52] Running
	I0416 17:57:44.663100   64516 system_pods.go:89] "kube-proxy-nndbn" [cbbb14ee-0059-4818-b1be-70c77aa5d03d] Running
	I0416 17:57:44.663105   64516 system_pods.go:89] "kube-scheduler-flannel-726705" [a0ec179a-d440-4dae-a727-f1785445abd7] Running
	I0416 17:57:44.663108   64516 system_pods.go:89] "storage-provisioner" [568b5d91-2a3b-4b15-92d5-76d8c57875ae] Running
	I0416 17:57:44.663115   64516 system_pods.go:126] duration metric: took 208.380841ms to wait for k8s-apps to be running ...
	I0416 17:57:44.663122   64516 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:57:44.663165   64516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:57:44.682581   64516 system_svc.go:56] duration metric: took 19.447253ms WaitForService to wait for kubelet
	I0416 17:57:44.682612   64516 kubeadm.go:576] duration metric: took 23.886675053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:57:44.682636   64516 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:57:44.854576   64516 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:57:44.854604   64516 node_conditions.go:123] node cpu capacity is 2
	I0416 17:57:44.854616   64516 node_conditions.go:105] duration metric: took 171.976143ms to run NodePressure ...
	I0416 17:57:44.854627   64516 start.go:240] waiting for startup goroutines ...
	I0416 17:57:44.854633   64516 start.go:245] waiting for cluster config update ...
	I0416 17:57:44.854642   64516 start.go:254] writing updated cluster config ...
	I0416 17:57:44.854899   64516 ssh_runner.go:195] Run: rm -f paused
	I0416 17:57:44.909078   64516 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 17:57:44.911942   64516 out.go:177] * Done! kubectl is now configured to use "flannel-726705" cluster and "default" namespace by default
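	[editor's note] The pod_ready.go lines above (for example the 16s wait on coredns-76f75df574-ttf84) poll a pod until its Ready condition turns True. A client-go sketch of that wait loop follows; the kubeconfig path and the 2-second poll interval are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// expires, mirroring the pod_ready.go waits in the log (duration metrics omitted).
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-76f75df574-ttf84", 15*time.Minute))
}
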
	I0416 17:57:45.586489   65026 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 17:57:45.586552   65026 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 17:57:45.586647   65026 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 17:57:45.586762   65026 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 17:57:45.586905   65026 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 17:57:45.586992   65026 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 17:57:45.588445   65026 out.go:204]   - Generating certificates and keys ...
	I0416 17:57:45.588544   65026 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 17:57:45.588633   65026 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 17:57:45.588717   65026 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 17:57:45.588794   65026 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 17:57:45.588890   65026 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 17:57:45.588968   65026 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 17:57:45.589038   65026 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 17:57:45.589199   65026 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-726705 localhost] and IPs [192.168.61.204 127.0.0.1 ::1]
	I0416 17:57:45.589288   65026 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 17:57:45.589491   65026 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-726705 localhost] and IPs [192.168.61.204 127.0.0.1 ::1]
	I0416 17:57:45.589581   65026 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 17:57:45.589677   65026 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 17:57:45.589749   65026 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 17:57:45.589834   65026 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 17:57:45.589917   65026 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 17:57:45.590011   65026 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 17:57:45.590081   65026 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 17:57:45.590170   65026 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 17:57:45.590248   65026 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 17:57:45.590381   65026 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 17:57:45.590473   65026 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 17:57:45.592077   65026 out.go:204]   - Booting up control plane ...
	I0416 17:57:45.592245   65026 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 17:57:45.592344   65026 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 17:57:45.592450   65026 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 17:57:45.592572   65026 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 17:57:45.592696   65026 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 17:57:45.592751   65026 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 17:57:45.592968   65026 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 17:57:45.593080   65026 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502663 seconds
	I0416 17:57:45.593239   65026 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 17:57:45.593435   65026 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 17:57:45.593510   65026 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 17:57:45.593741   65026 kubeadm.go:309] [mark-control-plane] Marking the node enable-default-cni-726705 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 17:57:45.593820   65026 kubeadm.go:309] [bootstrap-token] Using token: qqb9mz.bkf2pw9odd3w1ws0
	I0416 17:57:45.595226   65026 out.go:204]   - Configuring RBAC rules ...
	I0416 17:57:45.595364   65026 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 17:57:45.595472   65026 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 17:57:45.595648   65026 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 17:57:45.595752   65026 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 17:57:45.595853   65026 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 17:57:45.595927   65026 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 17:57:45.596076   65026 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 17:57:45.596134   65026 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 17:57:45.596197   65026 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 17:57:45.596207   65026 kubeadm.go:309] 
	I0416 17:57:45.596259   65026 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 17:57:45.596267   65026 kubeadm.go:309] 
	I0416 17:57:45.596356   65026 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 17:57:45.596367   65026 kubeadm.go:309] 
	I0416 17:57:45.596387   65026 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 17:57:45.596444   65026 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 17:57:45.596521   65026 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 17:57:45.596529   65026 kubeadm.go:309] 
	I0416 17:57:45.596612   65026 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 17:57:45.596623   65026 kubeadm.go:309] 
	I0416 17:57:45.596687   65026 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 17:57:45.596696   65026 kubeadm.go:309] 
	I0416 17:57:45.596774   65026 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 17:57:45.596906   65026 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 17:57:45.596986   65026 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 17:57:45.596996   65026 kubeadm.go:309] 
	I0416 17:57:45.597118   65026 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 17:57:45.597188   65026 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 17:57:45.597195   65026 kubeadm.go:309] 
	I0416 17:57:45.597291   65026 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token qqb9mz.bkf2pw9odd3w1ws0 \
	I0416 17:57:45.597452   65026 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 \
	I0416 17:57:45.597483   65026 kubeadm.go:309] 	--control-plane 
	I0416 17:57:45.597491   65026 kubeadm.go:309] 
	I0416 17:57:45.597600   65026 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 17:57:45.597611   65026 kubeadm.go:309] 
	I0416 17:57:45.597726   65026 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token qqb9mz.bkf2pw9odd3w1ws0 \
	I0416 17:57:45.597863   65026 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 
	I0416 17:57:45.597889   65026 cni.go:84] Creating CNI manager for "bridge"
	I0416 17:57:45.600308   65026 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
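	[editor's note] The run above now switches to configuring the bridge CNI. For orientation, below is a generic bridge conflist written from Go; the subnet, plugin fields, and output path are placeholders and not necessarily the exact file minikube writes for this profile.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Emit a minimal bridge CNI conflist of the kind "Configuring bridge CNI"
// refers to. Values here are illustrative placeholders.
func main() {
	conf := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(data))
	// Placeholder path; CNI runtimes read conflists from /etc/cni/net.d.
	_ = os.WriteFile("/etc/cni/net.d/1-bridge.conflist", data, 0o644)
}
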
	
	
	==> CRI-O <==
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.626585068Z" level=debug msg="No credentials matching fake.domain/registry.k8s.io/echoserver found in /root/.dockercfg" file="config/config.go:846"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.626835785Z" level=debug msg="No credentials for fake.domain/registry.k8s.io/echoserver found" file="config/config.go:272"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.628049680Z" level=debug msg=" No signature storage configuration found for fake.domain/registry.k8s.io/echoserver:1.4, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.628137623Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/fake.domain" file="tlsclientconfig/tlsclientconfig.go:20"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.628235616Z" level=debug msg="GET https://fake.domain/v2/" file="docker/docker_client.go:631"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.632831846Z" level=debug msg="Ping https://fake.domain/v2/ err Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host (&url.Error{Op:\"Get\", URL:\"https://fake.domain/v2/\", Err:(*net.OpError)(0xc000904be0)})" file="docker/docker_client.go:897"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.632916281Z" level=debug msg="GET https://fake.domain/v1/_ping" file="docker/docker_client.go:631"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.633731782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fe833a7-1f29-44e9-99cd-8919ab5a7f93 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.633820625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fe833a7-1f29-44e9-99cd-8919ab5a7f93 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.635265020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d087663-cb91-4177-951a-6500a1dacc8a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.635837905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290266635815794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d087663-cb91-4177-951a-6500a1dacc8a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.636480115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aae4c334-471f-48c5-98b6-0f11340118b4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.636549203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aae4c334-471f-48c5-98b6-0f11340118b4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.636784766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac3befbcd4ab5383cd75068e9221bffe0cd5751ef72ce03cee4e3e5a8bf9bfa7,PodSandboxId:974b8077bc711a4508d6720b7ef2a81cb611d918065baf9897d284bbde430407,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289311191093658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913ab65e-4692-43fe-9160-4680d40d45ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea9ed6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e915af924f96433a47433525a85568e8338ecb63f016ab4a7294862eba0c2e,PodSandboxId:59378403dd979415b25c4d034f7b475f254b7bfc96466791ee420b471155465c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310465828038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mbxnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de57d75-6597-4fa8-bb38-f239a733477a,},Annotations:map[string]string{io.kubernetes.container.hash: aa6bebd1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1511852605ab7cb89dfa103aeb494f0ed3c42c748ba88684bada5318f8770e0,PodSandboxId:7325ee824c71a22611e7526575d587155bbaf9fd7de8629d048b326fe93d050a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310254234646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-slfsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b3b48ec-1ccb-4587-b9a0-75d6244dd3cf,},Annotations:map[string]string{io.kubernetes.container.hash: be012b20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8545f49aed2de0c5e35d3ebd149a48bcf7ea053cddfe13f1dd6195164255766,PodSandboxId:2c6508d1a8ff6929e82baa662d8e7dce78ae927adceaf5305c1d64dc7f73daa6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1713289309493497773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxdwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a03621-b707-49f1-a9f5-a8a3c73558eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d59805,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743f47d8b49851e6d029d1d5380f371249090d8a90dce3827d960c04685bb773,PodSandboxId:ac0cd97d0c3fe59b3d09a78d09401de6b06c6b288480a5b8658ffd2ed6ed157b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289289849247903,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3094b63b6dd171a81c08f1af4f0f2593,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ec0d0568d7e2360251eeeed4e659d5c69444e0baa3c948ebd3a1a0ada9a8c0,PodSandboxId:5d45bf703cee98e0182004b3c963f95f83000314d4650c785de6fd782a03ad6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289289847641241,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e8bacc4d98e0be0efa2f5fdaa22e7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487358ff90e129ea02ae984bc0a49498b5070b92f33f33e6292a9ff8894e8097,PodSandboxId:bba65a63f6b06d90593cd0a518fa88866be2677f1d6605412fb0b22967fdd8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289289781406846,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b66a67eaa290f5599a2d92f87e20a156,},Annotations:map[string]string{io.kubernetes.container.hash: a119b7b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266da1a1d2146172fc98ad3dd21efbf921afd6a12eaaf576e7076a8899ee51c9,PodSandboxId:a8526582712ad2a7267ad5205b8ed1839b0a4dc25526dcac84d88e9ad222fa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289289728089604,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 038ac1f610eb129ba18a8faf62ee9d65,},Annotations:map[string]string{io.kubernetes.container.hash: 4ceeca6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aae4c334-471f-48c5-98b6-0f11340118b4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.637840665Z" level=debug msg="Ping https://fake.domain/v1/_ping err Get \"https://fake.domain/v1/_ping\": dial tcp: lookup fake.domain: no such host (&url.Error{Op:\"Get\", URL:\"https://fake.domain/v1/_ping\", Err:(*net.OpError)(0xc0008fb810)})" file="docker/docker_client.go:927"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.637932923Z" level=debug msg="Accessing \"fake.domain/registry.k8s.io/echoserver:1.4\" failed: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" file="docker/docker_image_src.go:95"
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.637992862Z" level=debug msg="Error preparing image fake.domain/registry.k8s.io/echoserver:1.4: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" file="server/image_pull.go:213" id=5a0873d0-ceb0-451d-86ea-487b94468eab name=/runtime.v1.ImageService/PullImage
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.638122329Z" level=debug msg="Response error: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" file="otel-collector/interceptors.go:71" id=5a0873d0-ceb0-451d-86ea-487b94468eab name=/runtime.v1.ImageService/PullImage
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.683376217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbf9c532-3002-49e3-ba07-fcca0564fc09 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.684078964Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbf9c532-3002-49e3-ba07-fcca0564fc09 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.685614993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd415101-d55c-463a-aa3f-27d23816465a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.686269845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290266686248434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd415101-d55c-463a-aa3f-27d23816465a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.687342193Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9327d89-a597-4033-9f18-9deaab5fe84c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.687425018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9327d89-a597-4033-9f18-9deaab5fe84c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:57:46 embed-certs-512869 crio[734]: time="2024-04-16 17:57:46.687614900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac3befbcd4ab5383cd75068e9221bffe0cd5751ef72ce03cee4e3e5a8bf9bfa7,PodSandboxId:974b8077bc711a4508d6720b7ef2a81cb611d918065baf9897d284bbde430407,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289311191093658,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913ab65e-4692-43fe-9160-4680d40d45ea,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea9ed6d,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1e915af924f96433a47433525a85568e8338ecb63f016ab4a7294862eba0c2e,PodSandboxId:59378403dd979415b25c4d034f7b475f254b7bfc96466791ee420b471155465c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310465828038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mbxnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de57d75-6597-4fa8-bb38-f239a733477a,},Annotations:map[string]string{io.kubernetes.container.hash: aa6bebd1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1511852605ab7cb89dfa103aeb494f0ed3c42c748ba88684bada5318f8770e0,PodSandboxId:7325ee824c71a22611e7526575d587155bbaf9fd7de8629d048b326fe93d050a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289310254234646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-slfsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b3b48ec-1ccb-4587-b9a0-75d6244dd3cf,},Annotations:map[string]string{io.kubernetes.container.hash: be012b20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8545f49aed2de0c5e35d3ebd149a48bcf7ea053cddfe13f1dd6195164255766,PodSandboxId:2c6508d1a8ff6929e82baa662d8e7dce78ae927adceaf5305c1d64dc7f73daa6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1713289309493497773,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxdwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66a03621-b707-49f1-a9f5-a8a3c73558eb,},Annotations:map[string]string{io.kubernetes.container.hash: 5d59805,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743f47d8b49851e6d029d1d5380f371249090d8a90dce3827d960c04685bb773,PodSandboxId:ac0cd97d0c3fe59b3d09a78d09401de6b06c6b288480a5b8658ffd2ed6ed157b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713289289849247903,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3094b63b6dd171a81c08f1af4f0f2593,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5ec0d0568d7e2360251eeeed4e659d5c69444e0baa3c948ebd3a1a0ada9a8c0,PodSandboxId:5d45bf703cee98e0182004b3c963f95f83000314d4650c785de6fd782a03ad6f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713289289847641241,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e8bacc4d98e0be0efa2f5fdaa22e7a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487358ff90e129ea02ae984bc0a49498b5070b92f33f33e6292a9ff8894e8097,PodSandboxId:bba65a63f6b06d90593cd0a518fa88866be2677f1d6605412fb0b22967fdd8ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713289289781406846,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b66a67eaa290f5599a2d92f87e20a156,},Annotations:map[string]string{io.kubernetes.container.hash: a119b7b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266da1a1d2146172fc98ad3dd21efbf921afd6a12eaaf576e7076a8899ee51c9,PodSandboxId:a8526582712ad2a7267ad5205b8ed1839b0a4dc25526dcac84d88e9ad222fa4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289289728089604,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512869,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 038ac1f610eb129ba18a8faf62ee9d65,},Annotations:map[string]string{io.kubernetes.container.hash: 4ceeca6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9327d89-a597-4033-9f18-9deaab5fe84c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac3befbcd4ab5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   974b8077bc711       storage-provisioner
	d1e915af924f9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   59378403dd979       coredns-76f75df574-mbxnj
	f1511852605ab       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   7325ee824c71a       coredns-76f75df574-slfsc
	c8545f49aed2d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   15 minutes ago      Running             kube-proxy                0                   2c6508d1a8ff6       kube-proxy-vxdwg
	743f47d8b4985       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   16 minutes ago      Running             kube-scheduler            2                   ac0cd97d0c3fe       kube-scheduler-embed-certs-512869
	d5ec0d0568d7e       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   16 minutes ago      Running             kube-controller-manager   2                   5d45bf703cee9       kube-controller-manager-embed-certs-512869
	487358ff90e12       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   16 minutes ago      Running             kube-apiserver            2                   bba65a63f6b06       kube-apiserver-embed-certs-512869
	266da1a1d2146       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   a8526582712ad       etcd-embed-certs-512869
	
	
	==> coredns [d1e915af924f96433a47433525a85568e8338ecb63f016ab4a7294862eba0c2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f1511852605ab7cb89dfa103aeb494f0ed3c42c748ba88684bada5318f8770e0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-512869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-512869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=embed-certs-512869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_41_36_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:41:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-512869
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:57:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:57:16 +0000   Tue, 16 Apr 2024 17:41:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:57:16 +0000   Tue, 16 Apr 2024 17:41:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:57:16 +0000   Tue, 16 Apr 2024 17:41:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:57:16 +0000   Tue, 16 Apr 2024 17:41:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.141
	  Hostname:    embed-certs-512869
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b4244c76c9b420393249cd324acac50
	  System UUID:                0b4244c7-6c9b-4203-9324-9cd324acac50
	  Boot ID:                    18a76deb-aaf0-4212-b1c0-17d786568f1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-mbxnj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-76f75df574-slfsc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-512869                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-512869             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-512869    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-vxdwg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-512869             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-bgdrb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-512869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-512869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-512869 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-512869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-512869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-512869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-512869 event: Registered Node embed-certs-512869 in Controller
	
	
	==> dmesg <==
	[  +0.052379] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043590] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.593160] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.399498] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.694225] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.716579] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.058826] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064987] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.226129] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.141948] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +0.328517] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[  +5.119818] systemd-fstab-generator[819]: Ignoring "noauto" option for root device
	[  +0.060448] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.056514] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +5.618353] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.961939] kauditd_printk_skb: 84 callbacks suppressed
	[Apr16 17:41] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.914775] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +4.733151] kauditd_printk_skb: 57 callbacks suppressed
	[  +3.090622] systemd-fstab-generator[3984]: Ignoring "noauto" option for root device
	[ +12.458663] systemd-fstab-generator[4174]: Ignoring "noauto" option for root device
	[  +0.139584] kauditd_printk_skb: 14 callbacks suppressed
	[Apr16 17:42] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [266da1a1d2146172fc98ad3dd21efbf921afd6a12eaaf576e7076a8899ee51c9] <==
	{"level":"info","ts":"2024-04-16T17:51:30.934215Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":716}
	{"level":"info","ts":"2024-04-16T17:51:30.94468Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":716,"took":"9.955709ms","hash":980955426,"current-db-size-bytes":2314240,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2314240,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-16T17:51:30.944749Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":980955426,"revision":716,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T17:52:09.474662Z","caller":"traceutil/trace.go:171","msg":"trace[1785648362] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"578.422559ms","start":"2024-04-16T17:52:08.896197Z","end":"2024-04-16T17:52:09.474619Z","steps":["trace[1785648362] 'process raft request'  (duration: 578.295125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:52:09.476344Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:52:08.896181Z","time spent":"578.958871ms","remote":"127.0.0.1:60392","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:991 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-16T17:52:09.709751Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.942734ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18105190579064365091 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-512869\" mod_revision:984 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-512869\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-512869\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-16T17:52:09.709888Z","caller":"traceutil/trace.go:171","msg":"trace[436177526] linearizableReadLoop","detail":"{readStateIndex:1135; appliedIndex:1134; }","duration":"690.910211ms","start":"2024-04-16T17:52:09.018968Z","end":"2024-04-16T17:52:09.709878Z","steps":["trace[436177526] 'read index received'  (duration: 455.996305ms)","trace[436177526] 'applied index is now lower than readState.Index'  (duration: 234.912903ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:52:09.710349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"514.263858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:52:09.710442Z","caller":"traceutil/trace.go:171","msg":"trace[1692622747] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:993; }","duration":"514.44043ms","start":"2024-04-16T17:52:09.195973Z","end":"2024-04-16T17:52:09.710413Z","steps":["trace[1692622747] 'agreement among raft nodes before linearized reading'  (duration: 514.221517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:52:09.710481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:52:09.195926Z","time spent":"514.543394ms","remote":"127.0.0.1:60406","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-04-16T17:52:09.71063Z","caller":"traceutil/trace.go:171","msg":"trace[1588486538] transaction","detail":"{read_only:false; response_revision:993; number_of_response:1; }","duration":"710.479327ms","start":"2024-04-16T17:52:09.000134Z","end":"2024-04-16T17:52:09.710614Z","steps":["trace[1588486538] 'process raft request'  (duration: 590.416986ms)","trace[1588486538] 'compare'  (duration: 118.858965ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:52:09.710678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"691.736041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:52:09.71075Z","caller":"traceutil/trace.go:171","msg":"trace[1688593233] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:993; }","duration":"691.832601ms","start":"2024-04-16T17:52:09.018907Z","end":"2024-04-16T17:52:09.71074Z","steps":["trace[1688593233] 'agreement among raft nodes before linearized reading'  (duration: 691.740892ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:52:09.71082Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:52:09.018829Z","time spent":"691.983386ms","remote":"127.0.0.1:60208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-16T17:52:09.710884Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:52:09.000107Z","time spent":"710.604401ms","remote":"127.0.0.1:60500","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-512869\" mod_revision:984 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-512869\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-512869\" > >"}
	{"level":"warn","ts":"2024-04-16T17:52:10.176172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.230166ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18105190579064365095 > lease_revoke:<id:7b428ee7ffe15fd8>","response":"size:28"}
	{"level":"info","ts":"2024-04-16T17:52:10.587105Z","caller":"traceutil/trace.go:171","msg":"trace[1172050191] transaction","detail":"{read_only:false; response_revision:994; number_of_response:1; }","duration":"135.916144ms","start":"2024-04-16T17:52:10.451174Z","end":"2024-04-16T17:52:10.587091Z","steps":["trace[1172050191] 'process raft request'  (duration: 135.659288ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:54:38.796666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.311247ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18105190579064365823 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-qtzzxwu72g47uwacf6u7rnocxu\" mod_revision:1105 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-qtzzxwu72g47uwacf6u7rnocxu\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-qtzzxwu72g47uwacf6u7rnocxu\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-16T17:54:38.796852Z","caller":"traceutil/trace.go:171","msg":"trace[116587622] transaction","detail":"{read_only:false; response_revision:1114; number_of_response:1; }","duration":"312.885372ms","start":"2024-04-16T17:54:38.48393Z","end":"2024-04-16T17:54:38.796815Z","steps":["trace[116587622] 'process raft request'  (duration: 148.282493ms)","trace[116587622] 'compare'  (duration: 164.048402ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:54:38.796908Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:54:38.483911Z","time spent":"312.971306ms","remote":"127.0.0.1:60500","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-qtzzxwu72g47uwacf6u7rnocxu\" mod_revision:1105 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-qtzzxwu72g47uwacf6u7rnocxu\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-qtzzxwu72g47uwacf6u7rnocxu\" > >"}
	{"level":"info","ts":"2024-04-16T17:56:30.942838Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":960}
	{"level":"info","ts":"2024-04-16T17:56:30.948435Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":960,"took":"4.537891ms","hash":2139909631,"current-db-size-bytes":2314240,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-16T17:56:30.948549Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2139909631,"revision":960,"compact-revision":716}
	{"level":"info","ts":"2024-04-16T17:56:54.959185Z","caller":"traceutil/trace.go:171","msg":"trace[365692668] transaction","detail":"{read_only:false; response_revision:1223; number_of_response:1; }","duration":"120.513976ms","start":"2024-04-16T17:56:54.838621Z","end":"2024-04-16T17:56:54.959135Z","steps":["trace[365692668] 'process raft request'  (duration: 85.937392ms)","trace[365692668] 'compare'  (duration: 34.333165ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:56:55.485458Z","caller":"traceutil/trace.go:171","msg":"trace[2076799958] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"128.34948ms","start":"2024-04-16T17:56:55.357086Z","end":"2024-04-16T17:56:55.485435Z","steps":["trace[2076799958] 'process raft request'  (duration: 128.1237ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:57:47 up 21 min,  0 users,  load average: 0.12, 0.15, 0.16
	Linux embed-certs-512869 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [487358ff90e129ea02ae984bc0a49498b5070b92f33f33e6292a9ff8894e8097] <==
	I0416 17:52:33.778394       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:54:33.777079       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:54:33.777598       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:54:33.777670       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:54:33.778590       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:54:33.778723       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:54:33.778765       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:56:32.782005       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:56:32.782516       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 17:56:33.783139       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:56:33.783237       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:56:33.783264       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:56:33.783507       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:56:33.783657       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:56:33.784905       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:57:33.783912       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:57:33.784226       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:57:33.784269       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:57:33.786173       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:57:33.786385       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:57:33.786435       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d5ec0d0568d7e2360251eeeed4e659d5c69444e0baa3c948ebd3a1a0ada9a8c0] <==
	I0416 17:51:48.596714       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:52:18.084577       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:52:18.609730       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:52:48.091825       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:52:48.624527       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 17:52:54.638378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="266.737µs"
	I0416 17:53:05.631797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="173.062µs"
	E0416 17:53:18.097258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:53:18.633509       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:53:48.103482       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:53:48.643757       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:54:18.110508       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:54:18.654681       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:54:48.118250       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:54:48.665868       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:55:18.124731       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:55:18.674429       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:55:48.132092       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:55:48.684031       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:56:18.138223       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:56:18.693858       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:56:48.145599       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:56:48.702957       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:57:18.150662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:57:18.713371       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c8545f49aed2de0c5e35d3ebd149a48bcf7ea053cddfe13f1dd6195164255766] <==
	I0416 17:41:49.808345       1 server_others.go:72] "Using iptables proxy"
	I0416 17:41:49.831670       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.83.141"]
	I0416 17:41:49.937528       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:41:49.937581       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:41:49.937599       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:41:49.946025       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:41:49.946229       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:41:49.946270       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:41:49.947950       1 config.go:188] "Starting service config controller"
	I0416 17:41:49.947994       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:41:49.948018       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:41:49.948022       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:41:49.948508       1 config.go:315] "Starting node config controller"
	I0416 17:41:49.948539       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:41:50.049531       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:41:50.049579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:41:50.052987       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [743f47d8b49851e6d029d1d5380f371249090d8a90dce3827d960c04685bb773] <==
	W0416 17:41:33.745563       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:41:33.745636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:41:33.756762       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:41:33.756827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:41:33.809817       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:41:33.810071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:41:33.930731       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 17:41:33.930785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 17:41:33.945451       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:41:33.945519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:41:33.962450       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:41:33.962562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:41:34.044183       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:41:34.044341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:41:34.062623       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 17:41:34.062726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 17:41:34.078423       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:41:34.078551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 17:41:34.086573       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:41:34.086632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:41:34.097948       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:41:34.098008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:41:34.339538       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:41:34.340206       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 17:41:37.387346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:55:36 embed-certs-512869 kubelet[3991]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:55:36 embed-certs-512869 kubelet[3991]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:55:46 embed-certs-512869 kubelet[3991]: E0416 17:55:46.618157    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:56:00 embed-certs-512869 kubelet[3991]: E0416 17:56:00.617259    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:56:12 embed-certs-512869 kubelet[3991]: E0416 17:56:12.617059    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:56:24 embed-certs-512869 kubelet[3991]: E0416 17:56:24.617237    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:56:36 embed-certs-512869 kubelet[3991]: E0416 17:56:36.667951    3991 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:56:36 embed-certs-512869 kubelet[3991]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:56:36 embed-certs-512869 kubelet[3991]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:56:36 embed-certs-512869 kubelet[3991]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:56:36 embed-certs-512869 kubelet[3991]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:56:38 embed-certs-512869 kubelet[3991]: E0416 17:56:38.616768    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:56:52 embed-certs-512869 kubelet[3991]: E0416 17:56:52.617617    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:57:04 embed-certs-512869 kubelet[3991]: E0416 17:57:04.616785    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:57:17 embed-certs-512869 kubelet[3991]: E0416 17:57:17.617625    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:57:31 embed-certs-512869 kubelet[3991]: E0416 17:57:31.617456    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	Apr 16 17:57:36 embed-certs-512869 kubelet[3991]: E0416 17:57:36.668056    3991 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 17:57:36 embed-certs-512869 kubelet[3991]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:57:36 embed-certs-512869 kubelet[3991]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:57:36 embed-certs-512869 kubelet[3991]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:57:36 embed-certs-512869 kubelet[3991]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:57:46 embed-certs-512869 kubelet[3991]: E0416 17:57:46.638530    3991 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 16 17:57:46 embed-certs-512869 kubelet[3991]: E0416 17:57:46.638583    3991 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 16 17:57:46 embed-certs-512869 kubelet[3991]: E0416 17:57:46.638876    3991 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9gjzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-bgdrb_kube-system(a14c8752-876d-4036-be19-bf5fd52bda61): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 16 17:57:46 embed-certs-512869 kubelet[3991]: E0416 17:57:46.638925    3991 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-bgdrb" podUID="a14c8752-876d-4036-be19-bf5fd52bda61"
	
	
	==> storage-provisioner [ac3befbcd4ab5383cd75068e9221bffe0cd5751ef72ce03cee4e3e5a8bf9bfa7] <==
	I0416 17:41:51.313887       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 17:41:51.330993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 17:41:51.331099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 17:41:51.341374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 17:41:51.341843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-512869_1ef5bb23-9a50-4811-83a2-dc154541d23f!
	I0416 17:41:51.344234       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9aaf55c9-f9f7-4b96-af6c-5ba966ba2d38", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-512869_1ef5bb23-9a50-4811-83a2-dc154541d23f became leader
	I0416 17:41:51.442730       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-512869_1ef5bb23-9a50-4811-83a2-dc154541d23f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512869 -n embed-certs-512869
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-512869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-bgdrb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-512869 describe pod metrics-server-57f55c9bc5-bgdrb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-512869 describe pod metrics-server-57f55c9bc5-bgdrb: exit status 1 (66.631511ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-bgdrb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-512869 describe pod metrics-server-57f55c9bc5-bgdrb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (413.75s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (356.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368813 -n no-preload-368813
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-16 17:56:54.521676461 +0000 UTC m=+5849.920352983
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-368813 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-368813 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.957µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-368813 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-368813 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-368813 logs -n 25: (1.968466149s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | status kubelet --all --full                          |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat kubelet --no-pager                               |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo journalctl                       | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | -xeu kubelet --all --full                            |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | status docker --all --full                           |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat docker --no-pager                                |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /etc/docker/daemon.json                              |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo docker                           | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | system info                                          |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat cri-docker --no-pager                            |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo                                  | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cri-dockerd --version                                |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | status containerd --all --full                       |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat containerd --no-pager                            |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo cat                              | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /etc/containerd/config.toml                          |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo containerd                       | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | config dump                                          |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | status crio --all --full                             |                |         |                |                     |                     |
	|         | --no-pager                                           |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo systemctl                        | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | cat crio --no-pager                                  |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo find                             | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |                |                     |                     |
	| ssh     | -p auto-726705 sudo crio                             | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	|         | config                                               |                |         |                |                     |                     |
	| delete  | -p auto-726705                                       | auto-726705    | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC | 16 Apr 24 17:56 UTC |
	| start   | -p flannel-726705                                    | flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 17:56 UTC |                     |
	|         | --memory=3072                                        |                |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |                |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                |         |                |                     |                     |
	|         | --container-runtime=crio                             |                |         |                |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:56:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:56:22.847442   64516 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:56:22.847634   64516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:56:22.847649   64516 out.go:304] Setting ErrFile to fd 2...
	I0416 17:56:22.847656   64516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:56:22.848084   64516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:56:22.848755   64516 out.go:298] Setting JSON to false
	I0416 17:56:22.849860   64516 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5935,"bootTime":1713284248,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:56:22.849923   64516 start.go:139] virtualization: kvm guest
	I0416 17:56:22.852374   64516 out.go:177] * [flannel-726705] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:56:22.853876   64516 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:56:22.855257   64516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:56:22.853875   64516 notify.go:220] Checking for updates...
	I0416 17:56:22.857728   64516 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:56:22.859146   64516 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:56:22.860578   64516 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:56:22.861927   64516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:56:22.863572   64516 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:56:22.863662   64516 config.go:182] Loaded profile config "embed-certs-512869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:56:22.863750   64516 config.go:182] Loaded profile config "no-preload-368813": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 17:56:22.863827   64516 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:56:22.900293   64516 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 17:56:22.901520   64516 start.go:297] selected driver: kvm2
	I0416 17:56:22.901541   64516 start.go:901] validating driver "kvm2" against <nil>
	I0416 17:56:22.901562   64516 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:56:22.902448   64516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:56:22.902533   64516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:56:22.917664   64516 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:56:22.917704   64516 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:56:22.917901   64516 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:56:22.917960   64516 cni.go:84] Creating CNI manager for "flannel"
	I0416 17:56:22.917970   64516 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0416 17:56:22.918034   64516 start.go:340] cluster config:
	{Name:flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:56:22.918140   64516 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:56:22.919863   64516 out.go:177] * Starting "flannel-726705" primary control-plane node in "flannel-726705" cluster
	I0416 17:56:22.921075   64516 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:56:22.921110   64516 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:56:22.921128   64516 cache.go:56] Caching tarball of preloaded images
	I0416 17:56:22.921224   64516 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:56:22.921235   64516 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:56:22.921347   64516 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/config.json ...
	I0416 17:56:22.921372   64516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/config.json: {Name:mk817d0ceb012fd224a9286f0d9be803c424abe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:56:22.921525   64516 start.go:360] acquireMachinesLock for flannel-726705: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:56:22.921563   64516 start.go:364] duration metric: took 19.993µs to acquireMachinesLock for "flannel-726705"
	I0416 17:56:22.921581   64516 start.go:93] Provisioning new machine with config: &{Name:flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:56:22.921666   64516 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 17:56:22.053977   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:56:24.549923   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:56:26.550084   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:56:22.923709   64516 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0416 17:56:22.923960   64516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:56:22.924018   64516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:56:22.938877   64516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37065
	I0416 17:56:22.939366   64516 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:56:22.939974   64516 main.go:141] libmachine: Using API Version  1
	I0416 17:56:22.940001   64516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:56:22.940360   64516 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:56:22.940627   64516 main.go:141] libmachine: (flannel-726705) Calling .GetMachineName
	I0416 17:56:22.940854   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:56:22.940987   64516 start.go:159] libmachine.API.Create for "flannel-726705" (driver="kvm2")
	I0416 17:56:22.941021   64516 client.go:168] LocalClient.Create starting
	I0416 17:56:22.941068   64516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 17:56:22.941106   64516 main.go:141] libmachine: Decoding PEM data...
	I0416 17:56:22.941148   64516 main.go:141] libmachine: Parsing certificate...
	I0416 17:56:22.941214   64516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 17:56:22.941249   64516 main.go:141] libmachine: Decoding PEM data...
	I0416 17:56:22.941270   64516 main.go:141] libmachine: Parsing certificate...
	I0416 17:56:22.941297   64516 main.go:141] libmachine: Running pre-create checks...
	I0416 17:56:22.941310   64516 main.go:141] libmachine: (flannel-726705) Calling .PreCreateCheck
	I0416 17:56:22.941672   64516 main.go:141] libmachine: (flannel-726705) Calling .GetConfigRaw
	I0416 17:56:22.942032   64516 main.go:141] libmachine: Creating machine...
	I0416 17:56:22.942049   64516 main.go:141] libmachine: (flannel-726705) Calling .Create
	I0416 17:56:22.942305   64516 main.go:141] libmachine: (flannel-726705) Creating KVM machine...
	I0416 17:56:22.943472   64516 main.go:141] libmachine: (flannel-726705) DBG | found existing default KVM network
	I0416 17:56:22.944693   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:22.944545   64539 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:c7:a6} reservation:<nil>}
	I0416 17:56:22.945757   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:22.945673   64539 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028aac0}
	I0416 17:56:22.945778   64516 main.go:141] libmachine: (flannel-726705) DBG | created network xml: 
	I0416 17:56:22.945791   64516 main.go:141] libmachine: (flannel-726705) DBG | <network>
	I0416 17:56:22.945805   64516 main.go:141] libmachine: (flannel-726705) DBG |   <name>mk-flannel-726705</name>
	I0416 17:56:22.945894   64516 main.go:141] libmachine: (flannel-726705) DBG |   <dns enable='no'/>
	I0416 17:56:22.945925   64516 main.go:141] libmachine: (flannel-726705) DBG |   
	I0416 17:56:22.945936   64516 main.go:141] libmachine: (flannel-726705) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0416 17:56:22.945945   64516 main.go:141] libmachine: (flannel-726705) DBG |     <dhcp>
	I0416 17:56:22.945958   64516 main.go:141] libmachine: (flannel-726705) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0416 17:56:22.945977   64516 main.go:141] libmachine: (flannel-726705) DBG |     </dhcp>
	I0416 17:56:22.945989   64516 main.go:141] libmachine: (flannel-726705) DBG |   </ip>
	I0416 17:56:22.946003   64516 main.go:141] libmachine: (flannel-726705) DBG |   
	I0416 17:56:22.946015   64516 main.go:141] libmachine: (flannel-726705) DBG | </network>
	I0416 17:56:22.946025   64516 main.go:141] libmachine: (flannel-726705) DBG | 
	I0416 17:56:22.951388   64516 main.go:141] libmachine: (flannel-726705) DBG | trying to create private KVM network mk-flannel-726705 192.168.50.0/24...
	I0416 17:56:23.021463   64516 main.go:141] libmachine: (flannel-726705) DBG | private KVM network mk-flannel-726705 192.168.50.0/24 created
	I0416 17:56:23.021493   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:23.021420   64539 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:56:23.021506   64516 main.go:141] libmachine: (flannel-726705) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705 ...
	I0416 17:56:23.021523   64516 main.go:141] libmachine: (flannel-726705) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 17:56:23.021654   64516 main.go:141] libmachine: (flannel-726705) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:56:23.251407   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:23.251297   64539 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa...
	I0416 17:56:23.402350   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:23.402229   64539 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/flannel-726705.rawdisk...
	I0416 17:56:23.402380   64516 main.go:141] libmachine: (flannel-726705) DBG | Writing magic tar header
	I0416 17:56:23.402390   64516 main.go:141] libmachine: (flannel-726705) DBG | Writing SSH key tar header
	I0416 17:56:23.402398   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:23.402345   64539 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705 ...
	I0416 17:56:23.402458   64516 main.go:141] libmachine: (flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705
	I0416 17:56:23.402469   64516 main.go:141] libmachine: (flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 17:56:23.402478   64516 main.go:141] libmachine: (flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705 (perms=drwx------)
	I0416 17:56:23.402488   64516 main.go:141] libmachine: (flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 17:56:23.402502   64516 main.go:141] libmachine: (flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 17:56:23.402520   64516 main.go:141] libmachine: (flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 17:56:23.402539   64516 main.go:141] libmachine: (flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:56:23.402550   64516 main.go:141] libmachine: (flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 17:56:23.402563   64516 main.go:141] libmachine: (flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 17:56:23.402586   64516 main.go:141] libmachine: (flannel-726705) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 17:56:23.402599   64516 main.go:141] libmachine: (flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 17:56:23.402609   64516 main.go:141] libmachine: (flannel-726705) DBG | Checking permissions on dir: /home/jenkins
	I0416 17:56:23.402614   64516 main.go:141] libmachine: (flannel-726705) DBG | Checking permissions on dir: /home
	I0416 17:56:23.402621   64516 main.go:141] libmachine: (flannel-726705) DBG | Skipping /home - not owner
	I0416 17:56:23.402632   64516 main.go:141] libmachine: (flannel-726705) Creating domain...
	I0416 17:56:23.403857   64516 main.go:141] libmachine: (flannel-726705) define libvirt domain using xml: 
	I0416 17:56:23.403880   64516 main.go:141] libmachine: (flannel-726705) <domain type='kvm'>
	I0416 17:56:23.403891   64516 main.go:141] libmachine: (flannel-726705)   <name>flannel-726705</name>
	I0416 17:56:23.403899   64516 main.go:141] libmachine: (flannel-726705)   <memory unit='MiB'>3072</memory>
	I0416 17:56:23.403907   64516 main.go:141] libmachine: (flannel-726705)   <vcpu>2</vcpu>
	I0416 17:56:23.403914   64516 main.go:141] libmachine: (flannel-726705)   <features>
	I0416 17:56:23.403921   64516 main.go:141] libmachine: (flannel-726705)     <acpi/>
	I0416 17:56:23.403926   64516 main.go:141] libmachine: (flannel-726705)     <apic/>
	I0416 17:56:23.403943   64516 main.go:141] libmachine: (flannel-726705)     <pae/>
	I0416 17:56:23.403953   64516 main.go:141] libmachine: (flannel-726705)     
	I0416 17:56:23.403958   64516 main.go:141] libmachine: (flannel-726705)   </features>
	I0416 17:56:23.403963   64516 main.go:141] libmachine: (flannel-726705)   <cpu mode='host-passthrough'>
	I0416 17:56:23.403968   64516 main.go:141] libmachine: (flannel-726705)   
	I0416 17:56:23.403974   64516 main.go:141] libmachine: (flannel-726705)   </cpu>
	I0416 17:56:23.403978   64516 main.go:141] libmachine: (flannel-726705)   <os>
	I0416 17:56:23.403986   64516 main.go:141] libmachine: (flannel-726705)     <type>hvm</type>
	I0416 17:56:23.403991   64516 main.go:141] libmachine: (flannel-726705)     <boot dev='cdrom'/>
	I0416 17:56:23.403996   64516 main.go:141] libmachine: (flannel-726705)     <boot dev='hd'/>
	I0416 17:56:23.404002   64516 main.go:141] libmachine: (flannel-726705)     <bootmenu enable='no'/>
	I0416 17:56:23.404006   64516 main.go:141] libmachine: (flannel-726705)   </os>
	I0416 17:56:23.404018   64516 main.go:141] libmachine: (flannel-726705)   <devices>
	I0416 17:56:23.404025   64516 main.go:141] libmachine: (flannel-726705)     <disk type='file' device='cdrom'>
	I0416 17:56:23.404033   64516 main.go:141] libmachine: (flannel-726705)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/boot2docker.iso'/>
	I0416 17:56:23.404048   64516 main.go:141] libmachine: (flannel-726705)       <target dev='hdc' bus='scsi'/>
	I0416 17:56:23.404056   64516 main.go:141] libmachine: (flannel-726705)       <readonly/>
	I0416 17:56:23.404060   64516 main.go:141] libmachine: (flannel-726705)     </disk>
	I0416 17:56:23.404068   64516 main.go:141] libmachine: (flannel-726705)     <disk type='file' device='disk'>
	I0416 17:56:23.404075   64516 main.go:141] libmachine: (flannel-726705)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 17:56:23.404086   64516 main.go:141] libmachine: (flannel-726705)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/flannel-726705.rawdisk'/>
	I0416 17:56:23.404093   64516 main.go:141] libmachine: (flannel-726705)       <target dev='hda' bus='virtio'/>
	I0416 17:56:23.404098   64516 main.go:141] libmachine: (flannel-726705)     </disk>
	I0416 17:56:23.404102   64516 main.go:141] libmachine: (flannel-726705)     <interface type='network'>
	I0416 17:56:23.404111   64516 main.go:141] libmachine: (flannel-726705)       <source network='mk-flannel-726705'/>
	I0416 17:56:23.404116   64516 main.go:141] libmachine: (flannel-726705)       <model type='virtio'/>
	I0416 17:56:23.404124   64516 main.go:141] libmachine: (flannel-726705)     </interface>
	I0416 17:56:23.404128   64516 main.go:141] libmachine: (flannel-726705)     <interface type='network'>
	I0416 17:56:23.404137   64516 main.go:141] libmachine: (flannel-726705)       <source network='default'/>
	I0416 17:56:23.404141   64516 main.go:141] libmachine: (flannel-726705)       <model type='virtio'/>
	I0416 17:56:23.404147   64516 main.go:141] libmachine: (flannel-726705)     </interface>
	I0416 17:56:23.404152   64516 main.go:141] libmachine: (flannel-726705)     <serial type='pty'>
	I0416 17:56:23.404157   64516 main.go:141] libmachine: (flannel-726705)       <target port='0'/>
	I0416 17:56:23.404163   64516 main.go:141] libmachine: (flannel-726705)     </serial>
	I0416 17:56:23.404169   64516 main.go:141] libmachine: (flannel-726705)     <console type='pty'>
	I0416 17:56:23.404176   64516 main.go:141] libmachine: (flannel-726705)       <target type='serial' port='0'/>
	I0416 17:56:23.404181   64516 main.go:141] libmachine: (flannel-726705)     </console>
	I0416 17:56:23.404188   64516 main.go:141] libmachine: (flannel-726705)     <rng model='virtio'>
	I0416 17:56:23.404194   64516 main.go:141] libmachine: (flannel-726705)       <backend model='random'>/dev/random</backend>
	I0416 17:56:23.404201   64516 main.go:141] libmachine: (flannel-726705)     </rng>
	I0416 17:56:23.404205   64516 main.go:141] libmachine: (flannel-726705)     
	I0416 17:56:23.404212   64516 main.go:141] libmachine: (flannel-726705)     
	I0416 17:56:23.404217   64516 main.go:141] libmachine: (flannel-726705)   </devices>
	I0416 17:56:23.404224   64516 main.go:141] libmachine: (flannel-726705) </domain>
	I0416 17:56:23.404231   64516 main.go:141] libmachine: (flannel-726705) 
	I0416 17:56:23.408653   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:8a:2e:4b in network default
	I0416 17:56:23.409385   64516 main.go:141] libmachine: (flannel-726705) Ensuring networks are active...
	I0416 17:56:23.409409   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:23.410166   64516 main.go:141] libmachine: (flannel-726705) Ensuring network default is active
	I0416 17:56:23.410522   64516 main.go:141] libmachine: (flannel-726705) Ensuring network mk-flannel-726705 is active
	I0416 17:56:23.411142   64516 main.go:141] libmachine: (flannel-726705) Getting domain xml...
	I0416 17:56:23.411847   64516 main.go:141] libmachine: (flannel-726705) Creating domain...
	I0416 17:56:24.681283   64516 main.go:141] libmachine: (flannel-726705) Waiting to get IP...
	I0416 17:56:24.682221   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:24.682756   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:24.682798   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:24.682721   64539 retry.go:31] will retry after 243.222463ms: waiting for machine to come up
	I0416 17:56:24.928953   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:24.929545   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:24.929570   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:24.929501   64539 retry.go:31] will retry after 253.908749ms: waiting for machine to come up
	I0416 17:56:25.185170   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:25.185745   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:25.185794   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:25.185691   64539 retry.go:31] will retry after 449.85342ms: waiting for machine to come up
	I0416 17:56:25.637492   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:25.638008   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:25.638036   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:25.637962   64539 retry.go:31] will retry after 574.911563ms: waiting for machine to come up
	I0416 17:56:26.214706   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:26.215243   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:26.215274   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:26.215199   64539 retry.go:31] will retry after 489.701332ms: waiting for machine to come up
	I0416 17:56:26.706546   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:26.707021   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:26.707053   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:26.706966   64539 retry.go:31] will retry after 853.902526ms: waiting for machine to come up
	I0416 17:56:27.561906   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:27.562366   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:27.562392   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:27.562345   64539 retry.go:31] will retry after 805.001691ms: waiting for machine to come up
	I0416 17:56:29.049484   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:56:31.050487   59445 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace has status "Ready":"False"
	I0416 17:56:31.542281   59445 pod_ready.go:81] duration metric: took 4m0.000291707s for pod "metrics-server-57f55c9bc5-rs6mm" in "kube-system" namespace to be "Ready" ...
	E0416 17:56:31.542319   59445 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0416 17:56:31.542343   59445 pod_ready.go:38] duration metric: took 4m13.611689965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:56:31.542375   59445 kubeadm.go:591] duration metric: took 4m22.048839445s to restartPrimaryControlPlane
	W0416 17:56:31.542435   59445 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 17:56:31.542473   59445 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 17:56:28.368897   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:28.369463   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:28.369499   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:28.369416   64539 retry.go:31] will retry after 1.101861501s: waiting for machine to come up
	I0416 17:56:29.472585   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:29.473070   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:29.473096   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:29.473024   64539 retry.go:31] will retry after 1.377333342s: waiting for machine to come up
	I0416 17:56:30.852552   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:30.853114   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:30.853144   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:30.853070   64539 retry.go:31] will retry after 1.897416458s: waiting for machine to come up
	I0416 17:56:32.752704   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:32.753307   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:32.753429   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:32.753356   64539 retry.go:31] will retry after 1.954209244s: waiting for machine to come up
	I0416 17:56:34.710181   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:34.710709   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:34.710737   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:34.710665   64539 retry.go:31] will retry after 2.567373309s: waiting for machine to come up
	I0416 17:56:37.279098   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:37.279549   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:37.279572   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:37.279506   64539 retry.go:31] will retry after 4.499315075s: waiting for machine to come up
	I0416 17:56:41.782244   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:41.782701   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find current IP address of domain flannel-726705 in network mk-flannel-726705
	I0416 17:56:41.782729   64516 main.go:141] libmachine: (flannel-726705) DBG | I0416 17:56:41.782652   64539 retry.go:31] will retry after 4.155986671s: waiting for machine to come up
	I0416 17:56:45.942587   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:45.943052   64516 main.go:141] libmachine: (flannel-726705) Found IP for machine: 192.168.50.192
	I0416 17:56:45.943066   64516 main.go:141] libmachine: (flannel-726705) Reserving static IP address...
	I0416 17:56:45.943076   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has current primary IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:45.943366   64516 main.go:141] libmachine: (flannel-726705) DBG | unable to find host DHCP lease matching {name: "flannel-726705", mac: "52:54:00:54:ef:4b", ip: "192.168.50.192"} in network mk-flannel-726705
	I0416 17:56:46.016174   64516 main.go:141] libmachine: (flannel-726705) DBG | Getting to WaitForSSH function...
	I0416 17:56:46.016209   64516 main.go:141] libmachine: (flannel-726705) Reserved static IP address: 192.168.50.192
	I0416 17:56:46.016224   64516 main.go:141] libmachine: (flannel-726705) Waiting for SSH to be available...
	I0416 17:56:46.018949   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.019389   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.019420   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.019498   64516 main.go:141] libmachine: (flannel-726705) DBG | Using SSH client type: external
	I0416 17:56:46.019519   64516 main.go:141] libmachine: (flannel-726705) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa (-rw-------)
	I0416 17:56:46.019553   64516 main.go:141] libmachine: (flannel-726705) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 17:56:46.019568   64516 main.go:141] libmachine: (flannel-726705) DBG | About to run SSH command:
	I0416 17:56:46.019579   64516 main.go:141] libmachine: (flannel-726705) DBG | exit 0
	I0416 17:56:46.149171   64516 main.go:141] libmachine: (flannel-726705) DBG | SSH cmd err, output: <nil>: 
	I0416 17:56:46.149469   64516 main.go:141] libmachine: (flannel-726705) KVM machine creation complete!
	I0416 17:56:46.149831   64516 main.go:141] libmachine: (flannel-726705) Calling .GetConfigRaw
	I0416 17:56:46.150377   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:56:46.150598   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:56:46.150764   64516 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 17:56:46.150791   64516 main.go:141] libmachine: (flannel-726705) Calling .GetState
	I0416 17:56:46.152033   64516 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 17:56:46.152045   64516 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 17:56:46.152050   64516 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 17:56:46.152056   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:46.154473   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.154927   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.154952   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.155102   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:46.155323   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.155483   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.155653   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:46.155781   64516 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.156022   64516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0416 17:56:46.156039   64516 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 17:56:46.276560   64516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:56:46.276585   64516 main.go:141] libmachine: Detecting the provisioner...
	I0416 17:56:46.276593   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:46.279446   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.279791   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.279827   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.280024   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:46.280260   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.280455   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.280624   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:46.280865   64516 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.281064   64516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0416 17:56:46.281081   64516 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 17:56:46.402248   64516 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 17:56:46.402340   64516 main.go:141] libmachine: found compatible host: buildroot
	I0416 17:56:46.402356   64516 main.go:141] libmachine: Provisioning with buildroot...
	I0416 17:56:46.402366   64516 main.go:141] libmachine: (flannel-726705) Calling .GetMachineName
	I0416 17:56:46.402614   64516 buildroot.go:166] provisioning hostname "flannel-726705"
	I0416 17:56:46.402642   64516 main.go:141] libmachine: (flannel-726705) Calling .GetMachineName
	I0416 17:56:46.402847   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:46.405460   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.405903   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.405937   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.406147   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:46.406335   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.406503   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.406668   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:46.406859   64516 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.407050   64516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0416 17:56:46.407064   64516 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-726705 && echo "flannel-726705" | sudo tee /etc/hostname
	I0416 17:56:46.540698   64516 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-726705
	
	I0416 17:56:46.540732   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:46.543861   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.544304   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.544335   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.544506   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:46.544744   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.544927   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.545093   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:46.545265   64516 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.545444   64516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0416 17:56:46.545461   64516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-726705' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-726705/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-726705' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 17:56:46.671652   64516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 17:56:46.671682   64516 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 17:56:46.671704   64516 buildroot.go:174] setting up certificates
	I0416 17:56:46.671716   64516 provision.go:84] configureAuth start
	I0416 17:56:46.671727   64516 main.go:141] libmachine: (flannel-726705) Calling .GetMachineName
	I0416 17:56:46.671999   64516 main.go:141] libmachine: (flannel-726705) Calling .GetIP
	I0416 17:56:46.675025   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.675413   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.675454   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.675572   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:46.677976   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.678309   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.678340   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.678435   64516 provision.go:143] copyHostCerts
	I0416 17:56:46.678502   64516 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 17:56:46.678520   64516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 17:56:46.678595   64516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 17:56:46.678750   64516 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 17:56:46.678766   64516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 17:56:46.678804   64516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 17:56:46.678901   64516 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 17:56:46.678914   64516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 17:56:46.678953   64516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 17:56:46.679071   64516 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.flannel-726705 san=[127.0.0.1 192.168.50.192 flannel-726705 localhost minikube]
	I0416 17:56:46.731931   64516 provision.go:177] copyRemoteCerts
	I0416 17:56:46.731981   64516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 17:56:46.732009   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:46.734799   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.735157   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.735202   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.735417   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:46.735640   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.735827   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:46.735988   64516 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa Username:docker}
	I0416 17:56:46.829599   64516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 17:56:46.859701   64516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0416 17:56:46.887562   64516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
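If the provisioned TLS material ever needs to be inspected by hand, the SANs requested above (127.0.0.1, 192.168.50.192, flannel-726705, localhost, minikube) can be read back from the server certificate that was just copied; a sketch assuming openssl 1.1.1+ is present on the guest:

    # print the subject and SANs of the machine cert copied to /etc/docker
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
    # confirm it chains to the CA that was copied alongside it
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem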
	I0416 17:56:46.918615   64516 provision.go:87] duration metric: took 246.88468ms to configureAuth
	I0416 17:56:46.918646   64516 buildroot.go:189] setting minikube options for container-runtime
	I0416 17:56:46.918863   64516 config.go:182] Loaded profile config "flannel-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:56:46.918963   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:46.921678   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.922025   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:46.922053   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:46.922194   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:46.922380   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.922585   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:46.922710   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:46.922905   64516 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:46.923098   64516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0416 17:56:46.923113   64516 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 17:56:47.229984   64516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
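The printf verb in the logged command is mangled (%!s(MISSING)), but the echoed output above shows what it writes. A cleaned-up sketch of the equivalent step; the file path and option string are taken from the log, the reconstruction of the printf argument is an assumption:

    # drop minikube's extra CRI-O flags into a sysconfig snippet and restart the runtime
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio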
	
	I0416 17:56:47.230021   64516 main.go:141] libmachine: Checking connection to Docker...
	I0416 17:56:47.230031   64516 main.go:141] libmachine: (flannel-726705) Calling .GetURL
	I0416 17:56:47.231485   64516 main.go:141] libmachine: (flannel-726705) DBG | Using libvirt version 6000000
	I0416 17:56:47.233936   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.234280   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:47.234329   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.234509   64516 main.go:141] libmachine: Docker is up and running!
	I0416 17:56:47.234522   64516 main.go:141] libmachine: Reticulating splines...
	I0416 17:56:47.234528   64516 client.go:171] duration metric: took 24.293495834s to LocalClient.Create
	I0416 17:56:47.234548   64516 start.go:167] duration metric: took 24.29356109s to libmachine.API.Create "flannel-726705"
	I0416 17:56:47.234557   64516 start.go:293] postStartSetup for "flannel-726705" (driver="kvm2")
	I0416 17:56:47.234566   64516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 17:56:47.234581   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:56:47.234837   64516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 17:56:47.234859   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:47.237231   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.237712   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:47.237738   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.237887   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:47.238056   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:47.238223   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:47.238436   64516 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa Username:docker}
	I0416 17:56:47.329579   64516 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 17:56:47.334641   64516 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 17:56:47.334678   64516 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 17:56:47.334731   64516 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 17:56:47.334797   64516 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 17:56:47.334912   64516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 17:56:47.347436   64516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 17:56:47.375863   64516 start.go:296] duration metric: took 141.29282ms for postStartSetup
	I0416 17:56:47.375920   64516 main.go:141] libmachine: (flannel-726705) Calling .GetConfigRaw
	I0416 17:56:47.376540   64516 main.go:141] libmachine: (flannel-726705) Calling .GetIP
	I0416 17:56:47.379307   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.379664   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:47.379691   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.379924   64516 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/config.json ...
	I0416 17:56:47.380150   64516 start.go:128] duration metric: took 24.458473234s to createHost
	I0416 17:56:47.380178   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:47.382711   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.382999   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:47.383036   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.383207   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:47.383416   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:47.383575   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:47.383753   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:47.383901   64516 main.go:141] libmachine: Using SSH client type: native
	I0416 17:56:47.384115   64516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0416 17:56:47.384127   64516 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 17:56:47.502815   64516 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290207.481806547
	
	I0416 17:56:47.502837   64516 fix.go:216] guest clock: 1713290207.481806547
	I0416 17:56:47.502843   64516 fix.go:229] Guest: 2024-04-16 17:56:47.481806547 +0000 UTC Remote: 2024-04-16 17:56:47.380163259 +0000 UTC m=+24.580721830 (delta=101.643288ms)
	I0416 17:56:47.502861   64516 fix.go:200] guest clock delta is within tolerance: 101.643288ms
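The clock probe above runs a date command whose format string is garbled in the log (%!s(MISSING).%!N(MISSING)); the returned value 1713290207.481806547 makes it clear the intent is seconds plus nanoseconds. A sketch of the same host/guest comparison; the SSH key path is hypothetical shorthand for the machine key shown elsewhere in the log:

    # guest clock, seconds.nanoseconds, read the same way the log does
    guest=$(ssh -i ~/.minikube/machines/flannel-726705/id_rsa docker@192.168.50.192 'date +%s.%N')
    host=$(date +%s.%N)
    # print the absolute skew; minikube compares it against its own tolerance (fix.go above)
    awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "skew %.3fs\n", d }'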
	I0416 17:56:47.502865   64516 start.go:83] releasing machines lock for "flannel-726705", held for 24.5812927s
	I0416 17:56:47.502882   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:56:47.503139   64516 main.go:141] libmachine: (flannel-726705) Calling .GetIP
	I0416 17:56:47.505834   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.506183   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:47.506217   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.506339   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:56:47.506844   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:56:47.507007   64516 main.go:141] libmachine: (flannel-726705) Calling .DriverName
	I0416 17:56:47.507090   64516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 17:56:47.507132   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:47.507240   64516 ssh_runner.go:195] Run: cat /version.json
	I0416 17:56:47.507265   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHHostname
	I0416 17:56:47.510077   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.510259   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.510499   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:47.510534   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.510654   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:47.510681   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:47.510777   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:47.510872   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHPort
	I0416 17:56:47.510964   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:47.511009   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHKeyPath
	I0416 17:56:47.511110   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:47.511202   64516 main.go:141] libmachine: (flannel-726705) Calling .GetSSHUsername
	I0416 17:56:47.511273   64516 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa Username:docker}
	I0416 17:56:47.511318   64516 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/flannel-726705/id_rsa Username:docker}
	I0416 17:56:47.629133   64516 ssh_runner.go:195] Run: systemctl --version
	I0416 17:56:47.636265   64516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 17:56:47.803337   64516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 17:56:47.810242   64516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 17:56:47.810326   64516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 17:56:47.828275   64516 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 17:56:47.828300   64516 start.go:494] detecting cgroup driver to use...
	I0416 17:56:47.828374   64516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 17:56:47.848736   64516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 17:56:47.868165   64516 docker.go:217] disabling cri-docker service (if available) ...
	I0416 17:56:47.868218   64516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 17:56:47.885261   64516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 17:56:47.904142   64516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 17:56:48.062150   64516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 17:56:48.220938   64516 docker.go:233] disabling docker service ...
	I0416 17:56:48.221010   64516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 17:56:48.241286   64516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 17:56:48.259107   64516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 17:56:48.418892   64516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 17:56:48.564213   64516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 17:56:48.580610   64516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 17:56:48.602328   64516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 17:56:48.602422   64516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:56:48.614540   64516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 17:56:48.614634   64516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:56:48.626665   64516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:56:48.637893   64516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:56:48.649829   64516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 17:56:48.661492   64516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:56:48.672810   64516 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 17:56:48.694496   64516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
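Taken together, the crictl and sed steps above aim to leave the guest with a CRI endpoint file and a patched CRI-O drop-in; the end state they construct, reconstructed from the logged commands rather than dumped from the machine, looks like this:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf  (keys touched by the sed edits)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]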
	I0416 17:56:48.706760   64516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 17:56:48.717552   64516 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 17:56:48.717632   64516 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 17:56:48.732358   64516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
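The netfilter probe a few lines up fails harmlessly because br_netfilter is not loaded yet; the follow-up commands load the module and turn on IPv4 forwarding. The same sequence, spelled out (the commands mirror the log, the re-check of the sysctl is an extra illustrative step):

    sudo modprobe br_netfilter                           # makes /proc/sys/net/bridge/* appear
    sudo sysctl net.bridge.bridge-nf-call-iptables       # the probe that failed above should now succeed
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'  # enable forwarding for pod traffic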
	I0416 17:56:48.743338   64516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:56:48.881652   64516 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 17:56:49.035745   64516 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 17:56:49.035834   64516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 17:56:49.041741   64516 start.go:562] Will wait 60s for crictl version
	I0416 17:56:49.041816   64516 ssh_runner.go:195] Run: which crictl
	I0416 17:56:49.045988   64516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 17:56:49.085783   64516 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 17:56:49.085862   64516 ssh_runner.go:195] Run: crio --version
	I0416 17:56:49.117936   64516 ssh_runner.go:195] Run: crio --version
	I0416 17:56:49.154075   64516 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 17:56:49.155325   64516 main.go:141] libmachine: (flannel-726705) Calling .GetIP
	I0416 17:56:49.158177   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:49.158511   64516 main.go:141] libmachine: (flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:ef:4b", ip: ""} in network mk-flannel-726705: {Iface:virbr2 ExpiryTime:2024-04-16 18:56:39 +0000 UTC Type:0 Mac:52:54:00:54:ef:4b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:flannel-726705 Clientid:01:52:54:00:54:ef:4b}
	I0416 17:56:49.158538   64516 main.go:141] libmachine: (flannel-726705) DBG | domain flannel-726705 has defined IP address 192.168.50.192 and MAC address 52:54:00:54:ef:4b in network mk-flannel-726705
	I0416 17:56:49.158779   64516 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 17:56:49.163443   64516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
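The one-liner above rewrites /etc/hosts in place: it filters out any stale host.minikube.internal entry, appends the current gateway mapping, and copies the temp file back. The same logic, just unpacked for readability:

    # drop any old mapping, then append the gateway address for host.minikube.internal
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    printf '192.168.50.1\thost.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts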
	I0416 17:56:49.178382   64516 kubeadm.go:877] updating cluster {Name:flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 17:56:49.178493   64516 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:56:49.178539   64516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 17:56:49.222297   64516 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 17:56:49.222367   64516 ssh_runner.go:195] Run: which lz4
	I0416 17:56:49.227739   64516 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 17:56:49.233154   64516 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 17:56:49.233183   64516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 17:56:50.944648   64516 crio.go:462] duration metric: took 1.716953352s to copy over tarball
	I0416 17:56:50.944708   64516 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
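Because no preloaded images were found in the CRI-O store, the run falls back to copying the ~400 MB preload tarball and unpacking it over /var. The stat format verbs are mangled in the log (size and mtime, by the look of the %!s/%!y placeholders); a cleaned-up sketch of the check-then-extract sequence, with the stat format being an assumption and the tar flags copied from the log:

    # is a preload tarball already on the guest? (prints size and mtime if so)
    stat -c '%s %y' /preloaded.tar.lz4 || echo 'no preload on the guest yet'
    # unpack the cri-o image preload into /var, preserving security xattrs
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4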
	
	
	==> CRI-O <==
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.865965949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e3ae681-c641-4596-b1c0-5399f381f55f name=/runtime.v1.RuntimeService/Version
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.867156456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19de7408-6734-4c50-bf77-5c22f0ec6862 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.867786030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290215867762659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19de7408-6734-4c50-bf77-5c22f0ec6862 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.868378400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0241d323-a912-434c-8be5-3a18aeab0fd2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.868486416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0241d323-a912-434c-8be5-3a18aeab0fd2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.868690053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289083404032996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0697f82a83a7195d9b0dc02622a594abe278b3c71b38b1df4669cc60b4fd2186,PodSandboxId:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1713289061150783184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{io.kubernetes.container.hash: 11e8af95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183,PodSandboxId:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289060267585293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,},Annotations:map[string]string{io.kubernetes.container.hash: 9f89dad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350,PodSandboxId:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713289052674306325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c4
1-8dac9f537332,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea233a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713289052546999273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc
5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68,PodSandboxId:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289047994271559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6,PodSandboxId:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289047918008933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,},Annotations:map[string]string{io.kubernetes.containe
r.hash: c15aaa2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e,PodSandboxId:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713289047872761138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,},Annotations:map[string]string{io.kuber
netes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7,PodSandboxId:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289047748692641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 1016c0f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0241d323-a912-434c-8be5-3a18aeab0fd2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.901685404Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=99824f7e-9ff8-4b21-b7d8-5eb646ac7460 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.901978955Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-69lpx,Uid:b3b140b9-fe8c-4289-94d3-df5f8ee50485,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713289059994756010,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T17:37:32.086817812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&PodSandboxMetadata{Name:busybox,Uid:6fd4562b-26b6-4741-b9cd-d8c0939509ba,Namespace:default,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1713289059889886156,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T17:37:32.086822139Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9dcaa55a1790d662c5e687270a181c3da34dfde32ced0cd1a354a93ec359730b,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-tt8vp,Uid:6c42b82b-7ff1-4f18-a387-a2c7b06adb63,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713289058192554791,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-tt8vp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c42b82b-7ff1-4f18-a387-a2c7b06adb63,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T17:37:32.0
86813718Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713289052411191896,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-16T17:37:32.086821136Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&PodSandboxMetadata{Name:kube-proxy-jtn9f,Uid:b64c6a20-cc25-4ea9-9c41-8dac9f537332,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713289052410048893,Labels:map[string]string{controller-revision-hash: 79848686cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c41-8dac9f537332,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-04-16T17:37:32.086806739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-368813,Uid:6f28c1ee3c3a1d7662214f724572701e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713289047643391888,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.33:2379,kubernetes.io/config.hash: 6f28c1ee3c3a1d7662214f724572701e,kubernetes.io/config.seen: 2024-04-16T17:37:27.195854122Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-368813,U
id:c2484f20a4929050fcce28bc582bd0eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713289047634403784,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c2484f20a4929050fcce28bc582bd0eb,kubernetes.io/config.seen: 2024-04-16T17:37:27.085164535Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-368813,Uid:0d9ecc68dadaf8b9845b6219cefbe6a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713289047630026898,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0d9ecc68dadaf8b9845b6219cefbe6a0,kubernetes.io/config.seen: 2024-04-16T17:37:27.085163550Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-368813,Uid:89f629b76782333e87b4014cba31dc00,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713289047606007954,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.33:8443,kubernetes.io/config.hash: 89f629b76782333e87b4014cba31dc00,kube
rnetes.io/config.seen: 2024-04-16T17:37:27.085159748Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=99824f7e-9ff8-4b21-b7d8-5eb646ac7460 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.903088777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff394f06-0869-4b2c-8ff7-5917627c3930 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.903203983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff394f06-0869-4b2c-8ff7-5917627c3930 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.903404987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289083404032996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0697f82a83a7195d9b0dc02622a594abe278b3c71b38b1df4669cc60b4fd2186,PodSandboxId:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1713289061150783184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{io.kubernetes.container.hash: 11e8af95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183,PodSandboxId:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289060267585293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,},Annotations:map[string]string{io.kubernetes.container.hash: 9f89dad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350,PodSandboxId:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713289052674306325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c4
1-8dac9f537332,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea233a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713289052546999273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc
5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68,PodSandboxId:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289047994271559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6,PodSandboxId:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289047918008933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,},Annotations:map[string]string{io.kubernetes.containe
r.hash: c15aaa2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e,PodSandboxId:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713289047872761138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,},Annotations:map[string]string{io.kuber
netes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7,PodSandboxId:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289047748692641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 1016c0f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff394f06-0869-4b2c-8ff7-5917627c3930 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.917802780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63e12877-6bb2-4c46-9fcf-95c5eff873db name=/runtime.v1.RuntimeService/Version
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.917889849Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63e12877-6bb2-4c46-9fcf-95c5eff873db name=/runtime.v1.RuntimeService/Version
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.919901459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4da5d91-1278-4c33-a964-fe73473c0678 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.920243430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290215920218422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4da5d91-1278-4c33-a964-fe73473c0678 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.921550639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc8ad846-c1bb-4cce-8245-5ae172ffcb32 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.921612991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc8ad846-c1bb-4cce-8245-5ae172ffcb32 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.921817653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289083404032996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0697f82a83a7195d9b0dc02622a594abe278b3c71b38b1df4669cc60b4fd2186,PodSandboxId:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1713289061150783184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{io.kubernetes.container.hash: 11e8af95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183,PodSandboxId:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289060267585293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,},Annotations:map[string]string{io.kubernetes.container.hash: 9f89dad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350,PodSandboxId:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713289052674306325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c4
1-8dac9f537332,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea233a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713289052546999273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc
5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68,PodSandboxId:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289047994271559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6,PodSandboxId:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289047918008933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,},Annotations:map[string]string{io.kubernetes.containe
r.hash: c15aaa2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e,PodSandboxId:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713289047872761138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,},Annotations:map[string]string{io.kuber
netes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7,PodSandboxId:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289047748692641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 1016c0f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc8ad846-c1bb-4cce-8245-5ae172ffcb32 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.963987201Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cef0ca1-39dd-4e9c-8da6-445b5bf3f1f9 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.964382367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cef0ca1-39dd-4e9c-8da6-445b5bf3f1f9 name=/runtime.v1.RuntimeService/Version
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.965606732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2531192e-d935-437f-8bcf-c24c492898be name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.966023167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290215966001368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2531192e-d935-437f-8bcf-c24c492898be name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.966660058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7544fbf8-9cf6-4cff-8846-84255b1b9c6b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.966753465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7544fbf8-9cf6-4cff-8846-84255b1b9c6b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 17:56:55 no-preload-368813 crio[717]: time="2024-04-16 17:56:55.966996474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713289083404032996,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0697f82a83a7195d9b0dc02622a594abe278b3c71b38b1df4669cc60b4fd2186,PodSandboxId:953136307c659ba055a30ae19a33d71fa741bd510c115852754afb2acd91eac1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1713289061150783184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fd4562b-26b6-4741-b9cd-d8c0939509ba,},Annotations:map[string]string{io.kubernetes.container.hash: 11e8af95,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183,PodSandboxId:1b66ad477e63bc9f9bc50446984e7f7b7bf2c85b313c484e308c12b8d5df67f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713289060267585293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-69lpx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b140b9-fe8c-4289-94d3-df5f8ee50485,},Annotations:map[string]string{io.kubernetes.container.hash: 9f89dad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350,PodSandboxId:86f1bed24d28a0a1d8771c88cad171b6d0ce8a7bb6a87393c663f195bf3e3134,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:1713289052674306325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtn9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64c6a20-cc25-4ea9-9c4
1-8dac9f537332,},Annotations:map[string]string{io.kubernetes.container.hash: 5ea233a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be,PodSandboxId:4fbf28d41144ae058bfa0dec8f06e47a2f443e02f55f1709c5122e191aac5cde,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713289052546999273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6a06521-965b-4aa6-b3ed-1cd9bcc46dc
5,},Annotations:map[string]string{io.kubernetes.container.hash: 2e2a5a7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68,PodSandboxId:c5113c1c80b510be6161a805b34601635c04607090e52d1c58409e2bd69f2d2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713289047994271559,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2484f20a4929050fcce28bc582bd0eb,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6,PodSandboxId:5ea69adaa5bd215da9648829b024403186f8106f66fbee725a1ae9d58572d4c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713289047918008933,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f28c1ee3c3a1d7662214f724572701e,},Annotations:map[string]string{io.kubernetes.containe
r.hash: c15aaa2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e,PodSandboxId:da5a81607a13356f69afbe613e07461b32c907964eae46d32dda94443d4a0e41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713289047872761138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9ecc68dadaf8b9845b6219cefbe6a0,},Annotations:map[string]string{io.kuber
netes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7,PodSandboxId:8e575ab599eaed785f33d2dfb9cd6d91909fe9f48c3cd63025e433b6173894ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713289047748692641,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-368813,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89f629b76782333e87b4014cba31dc00,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 1016c0f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7544fbf8-9cf6-4cff-8846-84255b1b9c6b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	97a767c06231c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   4fbf28d41144a       storage-provisioner
	0697f82a83a71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   953136307c659       busybox
	00b1b1a135014       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   1b66ad477e63b       coredns-7db6d8ff4d-69lpx
	b20cd13eb5547       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e                                      19 minutes ago      Running             kube-proxy                1                   86f1bed24d28a       kube-proxy-jtn9f
	4f65b3614ace8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   4fbf28d41144a       storage-provisioner
	11bd4705b165f       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6                                      19 minutes ago      Running             kube-scheduler            1                   c5113c1c80b51       kube-scheduler-no-preload-368813
	f9d5271a91234       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   5ea69adaa5bd2       etcd-no-preload-368813
	936600d85bc99       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b                                      19 minutes ago      Running             kube-controller-manager   1                   da5a81607a133       kube-controller-manager-no-preload-368813
	5157fe646abc0       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1                                      19 minutes ago      Running             kube-apiserver            1                   8e575ab599eae       kube-apiserver-no-preload-368813
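
	The table above is the CRI-level container listing captured from the node, and it matches the ListContainers responses in the crio debug log: every control-plane container is on restart attempt 1, and storage-provisioner shows an exited attempt 1 next to a running attempt 2. For reference only, a comparable listing could be reproduced against a still-running profile with crictl over minikube ssh (profile name taken from the logs above; this command is a sketch, not part of the captured output):

	  out/minikube-linux-amd64 -p no-preload-368813 ssh "sudo crictl ps -a"

	The -a flag includes exited containers, which is how the restarted storage-provisioner attempt shows up alongside the running one.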
	
	
	==> coredns [00b1b1a135014348fe20cc2de7cac44aed5336131fe6ad200decc4c0045c9183] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44194 - 49085 "HINFO IN 400906379160287812.9137361871461743001. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.008848714s
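
	CoreDNS reports a clean startup here; the random-looking HINFO query answered with NXDOMAIN appears to be the loop plugin's self-check rather than a real lookup failure. A quick in-cluster DNS sanity check, assuming the kubeconfig context is named after the profile as minikube normally sets it up, would be something like:

	  kubectl --context no-preload-368813 run dns-check --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default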
	
	
	==> describe nodes <==
	Name:               no-preload-368813
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-368813
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=no-preload-368813
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_28_29_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:28:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-368813
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 17:56:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 17:53:21 +0000   Tue, 16 Apr 2024 17:28:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 17:53:21 +0000   Tue, 16 Apr 2024 17:28:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 17:53:21 +0000   Tue, 16 Apr 2024 17:28:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 17:53:21 +0000   Tue, 16 Apr 2024 17:37:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.33
	  Hostname:    no-preload-368813
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09543460441246e1b6aaf1f1552fa561
	  System UUID:                09543460-4412-46e1-b6aa-f1f1552fa561
	  Boot ID:                    9f113a53-e370-4d44-935e-83eedd02b0ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-69lpx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-368813                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-368813             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-368813    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-jtn9f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-368813             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-tt8vp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-368813 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-368813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-368813 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-368813 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-368813 event: Registered Node no-preload-368813 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-368813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-368813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-368813 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-368813 event: Registered Node no-preload-368813 in Controller
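
	The node description above shows a Ready node with no taints and modest aggregate requests (850m CPU, 370Mi memory), so scheduling pressure is not the issue in this run. Assuming the same profile-named kubeconfig context, the same view can be pulled directly, e.g. to re-check just the Ready condition:

	  kubectl --context no-preload-368813 describe node no-preload-368813
	  kubectl --context no-preload-368813 get node no-preload-368813 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'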
	
	
	==> dmesg <==
	[Apr16 17:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053473] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.983276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.605853] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr16 17:37] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.985702] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.064617] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084322] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.162468] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.164585] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.343845] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[ +17.193270] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.069977] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.016265] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +4.073693] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.605516] systemd-fstab-generator[1962]: Ignoring "noauto" option for root device
	[  +3.637313] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.821043] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [f9d5271a912340a441e5688254ee5c083702d61d72ac92618f9b35499610cee6] <==
	{"level":"info","ts":"2024-04-16T17:41:10.445867Z","caller":"traceutil/trace.go:171","msg":"trace[1670208070] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:741; }","duration":"122.091332ms","start":"2024-04-16T17:41:10.323746Z","end":"2024-04-16T17:41:10.445837Z","steps":["trace[1670208070] 'range keys from in-memory index tree'  (duration: 121.832847ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:41:10.446078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.45527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-tt8vp\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-16T17:41:10.44615Z","caller":"traceutil/trace.go:171","msg":"trace[386621786] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-tt8vp; range_end:; response_count:1; response_revision:741; }","duration":"134.56331ms","start":"2024-04-16T17:41:10.311575Z","end":"2024-04-16T17:41:10.446138Z","steps":["trace[386621786] 'range keys from in-memory index tree'  (duration: 134.362723ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:43:45.24379Z","caller":"traceutil/trace.go:171","msg":"trace[1131274137] transaction","detail":"{read_only:false; response_revision:868; number_of_response:1; }","duration":"129.350495ms","start":"2024-04-16T17:43:45.114386Z","end":"2024-04-16T17:43:45.243737Z","steps":["trace[1131274137] 'process raft request'  (duration: 128.959428ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:43:45.429638Z","caller":"traceutil/trace.go:171","msg":"trace[1808564091] linearizableReadLoop","detail":"{readStateIndex:977; appliedIndex:976; }","duration":"102.383922ms","start":"2024-04-16T17:43:45.327228Z","end":"2024-04-16T17:43:45.429612Z","steps":["trace[1808564091] 'read index received'  (duration: 40.979184ms)","trace[1808564091] 'applied index is now lower than readState.Index'  (duration: 61.404251ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:43:45.429895Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.550026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:43:45.429984Z","caller":"traceutil/trace.go:171","msg":"trace[2052462729] transaction","detail":"{read_only:false; response_revision:869; number_of_response:1; }","duration":"176.160402ms","start":"2024-04-16T17:43:45.25381Z","end":"2024-04-16T17:43:45.42997Z","steps":["trace[2052462729] 'process raft request'  (duration: 114.484617ms)","trace[2052462729] 'compare'  (duration: 61.202434ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:43:45.429998Z","caller":"traceutil/trace.go:171","msg":"trace[604975077] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:869; }","duration":"102.779314ms","start":"2024-04-16T17:43:45.327202Z","end":"2024-04-16T17:43:45.429981Z","steps":["trace[604975077] 'agreement among raft nodes before linearized reading'  (duration: 102.5469ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:43:45.693498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.090257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:43:45.693866Z","caller":"traceutil/trace.go:171","msg":"trace[1017121556] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:869; }","duration":"142.543963ms","start":"2024-04-16T17:43:45.551286Z","end":"2024-04-16T17:43:45.69383Z","steps":["trace[1017121556] 'range keys from in-memory index tree'  (duration: 142.038365ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:43:47.504002Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.09319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:43:47.504095Z","caller":"traceutil/trace.go:171","msg":"trace[463873739] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:870; }","duration":"180.217033ms","start":"2024-04-16T17:43:47.323861Z","end":"2024-04-16T17:43:47.504078Z","steps":["trace[463873739] 'range keys from in-memory index tree'  (duration: 180.041359ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:47:30.196979Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":805}
	{"level":"info","ts":"2024-04-16T17:47:30.208309Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":805,"took":"10.324697ms","hash":3236236763,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-04-16T17:47:30.208401Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3236236763,"revision":805,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T17:52:10.275408Z","caller":"traceutil/trace.go:171","msg":"trace[1392685381] transaction","detail":"{read_only:false; response_revision:1275; number_of_response:1; }","duration":"252.729853ms","start":"2024-04-16T17:52:10.022626Z","end":"2024-04-16T17:52:10.275356Z","steps":["trace[1392685381] 'process raft request'  (duration: 252.442062ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:52:10.526878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.922862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:52:10.52697Z","caller":"traceutil/trace.go:171","msg":"trace[330221084] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1275; }","duration":"200.121004ms","start":"2024-04-16T17:52:10.326829Z","end":"2024-04-16T17:52:10.52695Z","steps":["trace[330221084] 'range keys from in-memory index tree'  (duration: 199.876884ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:52:30.205285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1047}
	{"level":"info","ts":"2024-04-16T17:52:30.209914Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1047,"took":"3.815865ms","hash":2762445166,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-04-16T17:52:30.210008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2762445166,"revision":1047,"compact-revision":805}
	{"level":"warn","ts":"2024-04-16T17:54:38.211227Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.144371ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6335595902823191748 > lease_revoke:<id:57ec8ee7fc325478>","response":"size:29"}
	{"level":"info","ts":"2024-04-16T17:55:27.626072Z","caller":"traceutil/trace.go:171","msg":"trace[235929487] transaction","detail":"{read_only:false; response_revision:1436; number_of_response:1; }","duration":"117.253732ms","start":"2024-04-16T17:55:27.508739Z","end":"2024-04-16T17:55:27.625993Z","steps":["trace[235929487] 'process raft request'  (duration: 116.922739ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:56:54.536873Z","caller":"traceutil/trace.go:171","msg":"trace[275574631] transaction","detail":"{read_only:false; response_revision:1505; number_of_response:1; }","duration":"107.926387ms","start":"2024-04-16T17:56:54.428885Z","end":"2024-04-16T17:56:54.536812Z","steps":["trace[275574631] 'process raft request'  (duration: 107.765957ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:56:54.700812Z","caller":"traceutil/trace.go:171","msg":"trace[2097884248] transaction","detail":"{read_only:false; response_revision:1506; number_of_response:1; }","duration":"112.998728ms","start":"2024-04-16T17:56:54.587783Z","end":"2024-04-16T17:56:54.700782Z","steps":["trace[2097884248] 'process raft request'  (duration: 86.193969ms)","trace[2097884248] 'compare'  (duration: 26.681924ms)"],"step_count":2}
	
	
	==> kernel <==
	 17:56:56 up 20 min,  0 users,  load average: 0.19, 0.12, 0.10
	Linux no-preload-368813 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5157fe646abc0ad476e572ef70c9bb40712762b7feedc2059aa2831fa6af6cc7] <==
	I0416 17:50:32.542669       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:52:31.543697       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:52:31.543956       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 17:52:32.544634       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:52:32.544830       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:52:32.544870       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:52:32.544756       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:52:32.544939       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:52:32.545911       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:53:32.545793       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:53:32.545900       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:53:32.545919       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:53:32.546092       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:53:32.546142       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:53:32.547918       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:55:32.546508       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:55:32.546873       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 17:55:32.546904       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 17:55:32.548347       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 17:55:32.548545       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 17:55:32.548606       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
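
	Every error in the kube-apiserver log is the same condition repeating: the aggregated v1beta1.metrics.k8s.io APIService answers 503, so the OpenAPI aggregation controller keeps rate-limited requeues. To narrow down whether the metrics-server pod or the APIService registration is at fault, one could check (the label selector below assumes the stock k8s-app=metrics-server label):

	  kubectl --context no-preload-368813 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context no-preload-368813 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context no-preload-368813 get --raw /apis/metrics.k8s.io/v1beta1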
	
	
	==> kube-controller-manager [936600d85bc9979bed9d1c59c371bfcfe5be55777b0f015c57b77096fd329e6e] <==
	I0416 17:51:14.704106       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:51:44.189588       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:51:44.713304       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:52:14.196394       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:52:14.723045       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:52:44.202545       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:52:44.731709       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:53:14.208136       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:53:14.742326       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 17:53:38.207816       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="564.606µs"
	E0416 17:53:44.214014       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:53:44.751623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 17:53:52.203125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="127.779µs"
	E0416 17:54:14.223826       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:54:14.761277       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:54:44.228923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:54:44.773166       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:55:14.234482       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:55:14.782755       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:55:44.241170       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:55:44.791809       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:56:14.247026       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:56:14.800875       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 17:56:44.253381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 17:56:44.810280       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
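
	The controller-manager errors are a downstream symptom of the same unavailable metrics API: both the resource-quota controller and the garbage collector trip over stale discovery for metrics.k8s.io/v1beta1 every 30 seconds. Discovery for that group alone can be exercised with:

	  kubectl --context no-preload-368813 api-resources --api-group=metrics.k8s.io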
	
	
	==> kube-proxy [b20cd13eb5547926aaec71becc614f997569630ad6a952cc4bb8a46ae14e3350] <==
	I0416 17:37:32.949040       1 server_linux.go:69] "Using iptables proxy"
	I0416 17:37:32.962648       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.33"]
	I0416 17:37:33.074609       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0416 17:37:33.074686       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:37:33.074704       1 server_linux.go:165] "Using iptables Proxier"
	I0416 17:37:33.080581       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:37:33.080982       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0416 17:37:33.081187       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:37:33.082192       1 config.go:192] "Starting service config controller"
	I0416 17:37:33.082316       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0416 17:37:33.082364       1 config.go:101] "Starting endpoint slice config controller"
	I0416 17:37:33.082382       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0416 17:37:33.082855       1 config.go:319] "Starting node config controller"
	I0416 17:37:33.083692       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0416 17:37:33.183668       1 shared_informer.go:320] Caches are synced for service config
	I0416 17:37:33.186075       1 shared_informer.go:320] Caches are synced for node config
	I0416 17:37:33.187702       1 shared_informer.go:320] Caches are synced for endpoint slice config
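
	kube-proxy came up cleanly in single-stack IPv4 iptables mode and synced all three caches. If there were any doubt that rules were actually programmed, a quick count of KUBE- entries on the node would confirm it (again via minikube ssh, assuming the profile is still up):

	  out/minikube-linux-amd64 -p no-preload-368813 ssh "sudo iptables-save | grep -c KUBE-"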
	
	
	==> kube-scheduler [11bd4705b165f7520ddc162bc2e7bd5ed800f47fa3951ed038bb4e83de6e1b68] <==
	I0416 17:37:31.496079       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 17:37:31.496326       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 17:37:31.498605       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 17:37:31.496423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0416 17:37:31.513684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:37:31.513740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:37:31.513832       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 17:37:31.513873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 17:37:31.513928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 17:37:31.513937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 17:37:31.513967       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:37:31.513974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:37:31.524819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:37:31.524873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:37:31.528873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:37:31.528929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:37:31.528992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 17:37:31.529031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 17:37:31.529183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0416 17:37:31.529195       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0416 17:37:31.529223       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 17:37:31.529260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 17:37:31.529337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 17:37:31.529346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0416 17:37:31.599339       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 17:54:27 no-preload-368813 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:54:27 no-preload-368813 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:54:27 no-preload-368813 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:54:32 no-preload-368813 kubelet[1359]: E0416 17:54:32.185599    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:54:43 no-preload-368813 kubelet[1359]: E0416 17:54:43.184962    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:54:57 no-preload-368813 kubelet[1359]: E0416 17:54:57.190396    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:55:08 no-preload-368813 kubelet[1359]: E0416 17:55:08.184527    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:55:22 no-preload-368813 kubelet[1359]: E0416 17:55:22.184967    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:55:27 no-preload-368813 kubelet[1359]: E0416 17:55:27.215135    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 16 17:55:27 no-preload-368813 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:55:27 no-preload-368813 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:55:27 no-preload-368813 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:55:27 no-preload-368813 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:55:33 no-preload-368813 kubelet[1359]: E0416 17:55:33.184830    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:55:44 no-preload-368813 kubelet[1359]: E0416 17:55:44.184419    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:55:59 no-preload-368813 kubelet[1359]: E0416 17:55:59.185654    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:56:14 no-preload-368813 kubelet[1359]: E0416 17:56:14.185998    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:56:26 no-preload-368813 kubelet[1359]: E0416 17:56:26.185137    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:56:27 no-preload-368813 kubelet[1359]: E0416 17:56:27.213044    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 16 17:56:27 no-preload-368813 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 17:56:27 no-preload-368813 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 17:56:27 no-preload-368813 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 17:56:27 no-preload-368813 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 17:56:41 no-preload-368813 kubelet[1359]: E0416 17:56:41.184581    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	Apr 16 17:56:54 no-preload-368813 kubelet[1359]: E0416 17:56:54.186114    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-tt8vp" podUID="6c42b82b-7ff1-4f18-a387-a2c7b06adb63"
	
	
	==> storage-provisioner [4f65b3614ace8e5f6079b4d7332044b805db18ac580fc0d8636e28db1b8303be] <==
	I0416 17:37:32.740375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0416 17:38:02.745240       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [97a767c06231c2a787a772f451228cb5a609ab6f3dc1def57bee15de8b3eab00] <==
	I0416 17:38:03.505820       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 17:38:03.516196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 17:38:03.516317       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 17:38:20.920325       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 17:38:20.921030       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4520299d-bd38-406b-a78e-d4bd85587366", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-368813_5eb124a9-7fef-465b-b148-bd6050ca785a became leader
	I0416 17:38:20.921297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-368813_5eb124a9-7fef-465b-b148-bd6050ca785a!
	I0416 17:38:21.022104       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-368813_5eb124a9-7fef-465b-b148-bd6050ca785a!
	

                                                
                                                
-- /stdout --
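Diagnostic note on the storage-provisioner output above (an editor sketch, not part of the captured run): the first container instance exits fatally after roughly 30s with an i/o timeout reaching the in-cluster API service at 10.96.0.1:443, while the replacement instance started at 17:38:03 initializes normally and acquires the kube-system/k8s.io-minikube-hostpath lease. To confirm that restart pattern manually one could check the pod's restart count and the previous container's logs; the pod name storage-provisioner is assumed here from minikube's default addon layout, not taken from the run:
	kubectl --context no-preload-368813 -n kube-system get pod storage-provisioner -o wide
	kubectl --context no-preload-368813 -n kube-system logs storage-provisioner --previous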
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368813 -n no-preload-368813
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-368813 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-tt8vp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-368813 describe pod metrics-server-569cc877fc-tt8vp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-368813 describe pod metrics-server-569cc877fc-tt8vp: exit status 1 (67.366705ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-tt8vp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-368813 describe pod metrics-server-569cc877fc-tt8vp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (356.40s)
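Note on the failure above (a hedged reading of the captured logs, not an assertion from the test itself): the kubelet log shows metrics-server-569cc877fc-tt8vp stuck in ImagePullBackOff because its image resolves to the unreachable registry fake.domain/registry.k8s.io/echoserver:1.4, and by the time the post-mortem describe ran the pod had already been removed, hence the NotFound error. While such a pod still exists, an equivalent manual check could be the following (label selector and deployment name are assumptions, not taken from the run):
	kubectl --context no-preload-368813 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context no-preload-368813 -n kube-system describe deployment metrics-server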

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-16 18:06:36.545278518 +0000 UTC m=+6431.943955042
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
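The assertion above waited 9m0s for any pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and hit the context deadline. A manual equivalent of that wait, using the profile name from the lines above (selector and namespace are taken from the test output; the event query is an added suggestion):
	kubectl --context default-k8s-diff-port-304316 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-304316 -n kubernetes-dashboard get events --sort-by=.lastTimestamp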
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-304316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-304316 logs -n 25: (1.381803433s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | sudo cat                                             |                       |         |                |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | systemctl status kubelet --all                       |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo cat                                             |                       |         |                |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo cat                                             |                       |         |                |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo systemctl cat docker                            |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | docker system info                                   |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |                |                     |                     |
	|         | --all --full --no-pager                              |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo cat                    | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo cat                    | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | cri-dockerd --version                                |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | systemctl status containerd                          |                       |         |                |                     |                     |
	|         | --all --full --no-pager                              |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo cat                    | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo cat                                             |                       |         |                |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | containerd config dump                               |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | systemctl status crio --all                          |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |                |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | crio config                                          |                       |         |                |                     |                     |
	| delete  | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:59:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:59:46.737039   70853 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:59:46.737277   70853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:59:46.737288   70853 out.go:304] Setting ErrFile to fd 2...
	I0416 17:59:46.737292   70853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:59:46.737445   70853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:59:46.738058   70853 out.go:298] Setting JSON to false
	I0416 17:59:46.739271   70853 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6139,"bootTime":1713284248,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:59:46.739333   70853 start.go:139] virtualization: kvm guest
	I0416 17:59:46.741592   70853 out.go:177] * [custom-flannel-726705] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:59:46.743739   70853 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:59:46.743738   70853 notify.go:220] Checking for updates...
	I0416 17:59:46.745257   70853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:59:46.746786   70853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:59:46.748414   70853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:59:46.749785   70853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:59:46.751168   70853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:59:41.794997   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:42.295463   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:42.795335   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:43.295116   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:43.794569   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:44.295426   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:44.794957   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:45.294982   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:45.795569   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:46.295540   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:46.752805   70853 config.go:182] Loaded profile config "calico-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:59:46.752943   70853 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:59:46.753084   70853 config.go:182] Loaded profile config "kindnet-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:59:46.753210   70853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:59:46.795439   70853 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 17:59:46.796890   70853 start.go:297] selected driver: kvm2
	I0416 17:59:46.796910   70853 start.go:901] validating driver "kvm2" against <nil>
	I0416 17:59:46.796924   70853 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:59:46.797806   70853 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:59:46.797940   70853 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:59:46.813722   70853 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:59:46.813801   70853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:59:46.814093   70853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:59:46.814179   70853 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0416 17:59:46.814202   70853 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0416 17:59:46.814276   70853 start.go:340] cluster config:
	{Name:custom-flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:59:46.814482   70853 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:59:46.816050   70853 out.go:177] * Starting "custom-flannel-726705" primary control-plane node in "custom-flannel-726705" cluster
	I0416 17:59:46.817460   70853 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:59:46.817509   70853 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:59:46.817522   70853 cache.go:56] Caching tarball of preloaded images
	I0416 17:59:46.817628   70853 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:59:46.817642   70853 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:59:46.817770   70853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/config.json ...
	I0416 17:59:46.817797   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/config.json: {Name:mkbfcac95f14b1a42efb03c410f579e5b433a3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:59:46.817980   70853 start.go:360] acquireMachinesLock for custom-flannel-726705: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:59:46.818032   70853 start.go:364] duration metric: took 27.232µs to acquireMachinesLock for "custom-flannel-726705"
	I0416 17:59:46.818093   70853 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:59:46.818189   70853 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 17:59:46.794588   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:47.294648   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:47.794646   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:48.294580   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:48.795088   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:49.294580   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:49.444925   68924 kubeadm.go:1107] duration metric: took 11.352689424s to wait for elevateKubeSystemPrivileges
	W0416 17:59:49.444957   68924 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 17:59:49.444967   68924 kubeadm.go:393] duration metric: took 24.231281676s to StartCluster
	I0416 17:59:49.445018   68924 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:59:49.445092   68924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:59:49.446709   68924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:59:49.446908   68924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 17:59:49.446918   68924 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:59:49.448948   68924 out.go:177] * Verifying Kubernetes components...
	I0416 17:59:49.447005   68924 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:59:49.447104   68924 config.go:182] Loaded profile config "kindnet-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:59:49.450307   68924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:59:49.448995   68924 addons.go:69] Setting storage-provisioner=true in profile "kindnet-726705"
	I0416 17:59:49.450365   68924 addons.go:234] Setting addon storage-provisioner=true in "kindnet-726705"
	I0416 17:59:49.450400   68924 host.go:66] Checking if "kindnet-726705" exists ...
	I0416 17:59:49.449011   68924 addons.go:69] Setting default-storageclass=true in profile "kindnet-726705"
	I0416 17:59:49.450440   68924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-726705"
	I0416 17:59:49.450857   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.450891   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.450894   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.450905   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.468111   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0416 17:59:49.468734   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.469344   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.469375   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.469768   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.470351   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.470394   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.471942   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0416 17:59:49.472298   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.472773   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.472794   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.473173   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.473410   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetState
	I0416 17:59:49.477012   68924 addons.go:234] Setting addon default-storageclass=true in "kindnet-726705"
	I0416 17:59:49.477051   68924 host.go:66] Checking if "kindnet-726705" exists ...
	I0416 17:59:49.477360   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.477399   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.497819   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0416 17:59:49.498275   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.499872   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I0416 17:59:49.500355   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.500562   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.500575   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.500887   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.500901   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.501078   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.501217   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetState
	I0416 17:59:49.502197   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.502870   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.502909   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.503323   68924 main.go:141] libmachine: (kindnet-726705) Calling .DriverName
	I0416 17:59:49.508052   68924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:59:49.509525   68924 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:59:49.509538   68924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:59:49.509552   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHHostname
	I0416 17:59:49.512823   68924 main.go:141] libmachine: (kindnet-726705) DBG | domain kindnet-726705 has defined MAC address 52:54:00:13:aa:d0 in network mk-kindnet-726705
	I0416 17:59:49.513313   68924 main.go:141] libmachine: (kindnet-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:aa:d0", ip: ""} in network mk-kindnet-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:59:06 +0000 UTC Type:0 Mac:52:54:00:13:aa:d0 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:kindnet-726705 Clientid:01:52:54:00:13:aa:d0}
	I0416 17:59:49.513329   68924 main.go:141] libmachine: (kindnet-726705) DBG | domain kindnet-726705 has defined IP address 192.168.61.229 and MAC address 52:54:00:13:aa:d0 in network mk-kindnet-726705
	I0416 17:59:49.513477   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHPort
	I0416 17:59:49.513617   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHKeyPath
	I0416 17:59:49.513768   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHUsername
	I0416 17:59:49.513897   68924 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kindnet-726705/id_rsa Username:docker}
	I0416 17:59:49.524922   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I0416 17:59:49.526939   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.527464   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.527507   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.527907   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.528075   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetState
	I0416 17:59:49.529952   68924 main.go:141] libmachine: (kindnet-726705) Calling .DriverName
	I0416 17:59:49.530222   68924 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:59:49.530235   68924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:59:49.530247   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHHostname
	I0416 17:59:49.533336   68924 main.go:141] libmachine: (kindnet-726705) DBG | domain kindnet-726705 has defined MAC address 52:54:00:13:aa:d0 in network mk-kindnet-726705
	I0416 17:59:49.533827   68924 main.go:141] libmachine: (kindnet-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:aa:d0", ip: ""} in network mk-kindnet-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:59:06 +0000 UTC Type:0 Mac:52:54:00:13:aa:d0 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:kindnet-726705 Clientid:01:52:54:00:13:aa:d0}
	I0416 17:59:49.533855   68924 main.go:141] libmachine: (kindnet-726705) DBG | domain kindnet-726705 has defined IP address 192.168.61.229 and MAC address 52:54:00:13:aa:d0 in network mk-kindnet-726705
	I0416 17:59:49.534038   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHPort
	I0416 17:59:49.539432   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHKeyPath
	I0416 17:59:49.539652   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHUsername
	I0416 17:59:49.539788   68924 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kindnet-726705/id_rsa Username:docker}
	I0416 17:59:49.693551   68924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 17:59:49.745621   68924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:59:49.890557   68924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:59:49.946357   68924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:59:50.441284   68924 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0416 17:59:50.442672   68924 node_ready.go:35] waiting up to 15m0s for node "kindnet-726705" to be "Ready" ...
	I0416 17:59:50.869723   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.869748   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.869835   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.869859   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.870259   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.870277   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.870286   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.870292   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.870321   68924 main.go:141] libmachine: (kindnet-726705) DBG | Closing plugin on server side
	I0416 17:59:50.870374   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.870400   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.870417   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.870424   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.870495   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.870509   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.871922   68924 main.go:141] libmachine: (kindnet-726705) DBG | Closing plugin on server side
	I0416 17:59:50.872341   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.872358   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.885720   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.885742   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.886044   68924 main.go:141] libmachine: (kindnet-726705) DBG | Closing plugin on server side
	I0416 17:59:50.886100   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.886116   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.887729   68924 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 17:59:47.846690   67680 pod_ready.go:102] pod "calico-kube-controllers-787f445f84-b4whw" in "kube-system" namespace has status "Ready":"False"
	I0416 17:59:49.347410   67680 pod_ready.go:92] pod "calico-kube-controllers-787f445f84-b4whw" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:49.347433   67680 pod_ready.go:81] duration metric: took 17.008193947s for pod "calico-kube-controllers-787f445f84-b4whw" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:49.347443   67680 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-bkzqr" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:51.356403   67680 pod_ready.go:102] pod "calico-node-bkzqr" in "kube-system" namespace has status "Ready":"False"
	I0416 17:59:46.819995   70853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0416 17:59:46.820151   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:46.820183   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:46.835144   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0416 17:59:46.835599   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:46.836197   70853 main.go:141] libmachine: Using API Version  1
	I0416 17:59:46.836221   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:46.836591   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:46.836887   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetMachineName
	I0416 17:59:46.837116   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 17:59:46.837290   70853 start.go:159] libmachine.API.Create for "custom-flannel-726705" (driver="kvm2")
	I0416 17:59:46.837336   70853 client.go:168] LocalClient.Create starting
	I0416 17:59:46.837378   70853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 17:59:46.837407   70853 main.go:141] libmachine: Decoding PEM data...
	I0416 17:59:46.837427   70853 main.go:141] libmachine: Parsing certificate...
	I0416 17:59:46.837495   70853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 17:59:46.837523   70853 main.go:141] libmachine: Decoding PEM data...
	I0416 17:59:46.837540   70853 main.go:141] libmachine: Parsing certificate...
	I0416 17:59:46.837564   70853 main.go:141] libmachine: Running pre-create checks...
	I0416 17:59:46.837576   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .PreCreateCheck
	I0416 17:59:46.837968   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetConfigRaw
	I0416 17:59:46.838423   70853 main.go:141] libmachine: Creating machine...
	I0416 17:59:46.838443   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Create
	I0416 17:59:46.838619   70853 main.go:141] libmachine: (custom-flannel-726705) Creating KVM machine...
	I0416 17:59:46.840144   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found existing default KVM network
	I0416 17:59:46.841610   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.841449   70875 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:c7:a6} reservation:<nil>}
	I0416 17:59:46.842808   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.842688   70875 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1c:69:c5} reservation:<nil>}
	I0416 17:59:46.844258   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.844154   70875 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:7d:cc} reservation:<nil>}
	I0416 17:59:46.845657   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.845576   70875 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002bd950}
	I0416 17:59:46.845695   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | created network xml: 
	I0416 17:59:46.845721   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | <network>
	I0416 17:59:46.845736   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   <name>mk-custom-flannel-726705</name>
	I0416 17:59:46.845746   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   <dns enable='no'/>
	I0416 17:59:46.845756   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   
	I0416 17:59:46.845775   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0416 17:59:46.845804   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |     <dhcp>
	I0416 17:59:46.845833   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0416 17:59:46.845847   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |     </dhcp>
	I0416 17:59:46.845858   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   </ip>
	I0416 17:59:46.845868   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   
	I0416 17:59:46.845878   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | </network>
	I0416 17:59:46.845889   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | 
	I0416 17:59:46.851387   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | trying to create private KVM network mk-custom-flannel-726705 192.168.72.0/24...
	I0416 17:59:46.934476   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | private KVM network mk-custom-flannel-726705 192.168.72.0/24 created
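Note on the subnet scan above: the driver walks 192.168.x.0/24 candidates, skips ranges already owned by an existing libvirt bridge (virbr1, virbr2, ...), and takes the first free one (here 192.168.72.0/24) for the new private network. A minimal stand-alone sketch of that idea, using only the Go standard library; the helper names below are illustrative, not minikube's actual network.go:

// subnet_scan.go - illustrative only; not minikube's network.go.
// Returns the first 192.168.x.0/24 whose gateway (x.1) is not already
// assigned to a host interface, mirroring the "skipping subnet ... that is
// taken" / "using free private subnet" lines above.
package main

import (
	"fmt"
	"net"
)

func taken(gw net.IP) bool {
	ifaces, err := net.Interfaces()
	if err != nil {
		return false
	}
	for _, ifc := range ifaces {
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.Contains(gw) {
				return true // an existing bridge already owns this range
			}
		}
	}
	return false
}

func main() {
	for _, third := range []int{39, 50, 61, 72, 83, 94} {
		gw := net.IPv4(192, 168, byte(third), 1)
		if taken(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", third, gw)
		return
	}
	fmt.Println("no free subnet found")
}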
	I0416 17:59:46.934633   70853 main.go:141] libmachine: (custom-flannel-726705) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705 ...
	I0416 17:59:46.934719   70853 main.go:141] libmachine: (custom-flannel-726705) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 17:59:46.944957   70853 main.go:141] libmachine: (custom-flannel-726705) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:59:46.945028   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.934885   70875 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:59:47.192405   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:47.192219   70875 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa...
	I0416 17:59:47.463349   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:47.463160   70875 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/custom-flannel-726705.rawdisk...
	I0416 17:59:47.463392   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Writing magic tar header
	I0416 17:59:47.463446   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Writing SSH key tar header
	I0416 17:59:47.463466   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:47.463300   70875 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705 ...
	I0416 17:59:47.463481   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705 (perms=drwx------)
	I0416 17:59:47.463506   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705
	I0416 17:59:47.463529   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 17:59:47.463564   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:59:47.463591   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 17:59:47.463615   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 17:59:47.463669   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 17:59:47.463693   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 17:59:47.463711   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 17:59:47.463746   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins
	I0416 17:59:47.463777   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home
	I0416 17:59:47.463795   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Skipping /home - not owner
	I0416 17:59:47.463813   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 17:59:47.463860   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 17:59:47.463885   70853 main.go:141] libmachine: (custom-flannel-726705) Creating domain...
	I0416 17:59:47.464917   70853 main.go:141] libmachine: (custom-flannel-726705) define libvirt domain using xml: 
	I0416 17:59:47.464942   70853 main.go:141] libmachine: (custom-flannel-726705) <domain type='kvm'>
	I0416 17:59:47.464953   70853 main.go:141] libmachine: (custom-flannel-726705)   <name>custom-flannel-726705</name>
	I0416 17:59:47.464965   70853 main.go:141] libmachine: (custom-flannel-726705)   <memory unit='MiB'>3072</memory>
	I0416 17:59:47.464975   70853 main.go:141] libmachine: (custom-flannel-726705)   <vcpu>2</vcpu>
	I0416 17:59:47.467214   70853 main.go:141] libmachine: (custom-flannel-726705)   <features>
	I0416 17:59:47.467239   70853 main.go:141] libmachine: (custom-flannel-726705)     <acpi/>
	I0416 17:59:47.467247   70853 main.go:141] libmachine: (custom-flannel-726705)     <apic/>
	I0416 17:59:47.467255   70853 main.go:141] libmachine: (custom-flannel-726705)     <pae/>
	I0416 17:59:47.467263   70853 main.go:141] libmachine: (custom-flannel-726705)     
	I0416 17:59:47.467295   70853 main.go:141] libmachine: (custom-flannel-726705)   </features>
	I0416 17:59:47.467315   70853 main.go:141] libmachine: (custom-flannel-726705)   <cpu mode='host-passthrough'>
	I0416 17:59:47.467326   70853 main.go:141] libmachine: (custom-flannel-726705)   
	I0416 17:59:47.467335   70853 main.go:141] libmachine: (custom-flannel-726705)   </cpu>
	I0416 17:59:47.467349   70853 main.go:141] libmachine: (custom-flannel-726705)   <os>
	I0416 17:59:47.467360   70853 main.go:141] libmachine: (custom-flannel-726705)     <type>hvm</type>
	I0416 17:59:47.467368   70853 main.go:141] libmachine: (custom-flannel-726705)     <boot dev='cdrom'/>
	I0416 17:59:47.467378   70853 main.go:141] libmachine: (custom-flannel-726705)     <boot dev='hd'/>
	I0416 17:59:47.467385   70853 main.go:141] libmachine: (custom-flannel-726705)     <bootmenu enable='no'/>
	I0416 17:59:47.467395   70853 main.go:141] libmachine: (custom-flannel-726705)   </os>
	I0416 17:59:47.467403   70853 main.go:141] libmachine: (custom-flannel-726705)   <devices>
	I0416 17:59:47.467419   70853 main.go:141] libmachine: (custom-flannel-726705)     <disk type='file' device='cdrom'>
	I0416 17:59:47.467460   70853 main.go:141] libmachine: (custom-flannel-726705)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/boot2docker.iso'/>
	I0416 17:59:47.467475   70853 main.go:141] libmachine: (custom-flannel-726705)       <target dev='hdc' bus='scsi'/>
	I0416 17:59:47.467485   70853 main.go:141] libmachine: (custom-flannel-726705)       <readonly/>
	I0416 17:59:47.467493   70853 main.go:141] libmachine: (custom-flannel-726705)     </disk>
	I0416 17:59:47.467503   70853 main.go:141] libmachine: (custom-flannel-726705)     <disk type='file' device='disk'>
	I0416 17:59:47.467513   70853 main.go:141] libmachine: (custom-flannel-726705)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 17:59:47.467528   70853 main.go:141] libmachine: (custom-flannel-726705)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/custom-flannel-726705.rawdisk'/>
	I0416 17:59:47.467537   70853 main.go:141] libmachine: (custom-flannel-726705)       <target dev='hda' bus='virtio'/>
	I0416 17:59:47.467546   70853 main.go:141] libmachine: (custom-flannel-726705)     </disk>
	I0416 17:59:47.467553   70853 main.go:141] libmachine: (custom-flannel-726705)     <interface type='network'>
	I0416 17:59:47.467564   70853 main.go:141] libmachine: (custom-flannel-726705)       <source network='mk-custom-flannel-726705'/>
	I0416 17:59:47.467572   70853 main.go:141] libmachine: (custom-flannel-726705)       <model type='virtio'/>
	I0416 17:59:47.467581   70853 main.go:141] libmachine: (custom-flannel-726705)     </interface>
	I0416 17:59:47.467589   70853 main.go:141] libmachine: (custom-flannel-726705)     <interface type='network'>
	I0416 17:59:47.467599   70853 main.go:141] libmachine: (custom-flannel-726705)       <source network='default'/>
	I0416 17:59:47.467607   70853 main.go:141] libmachine: (custom-flannel-726705)       <model type='virtio'/>
	I0416 17:59:47.467617   70853 main.go:141] libmachine: (custom-flannel-726705)     </interface>
	I0416 17:59:47.467625   70853 main.go:141] libmachine: (custom-flannel-726705)     <serial type='pty'>
	I0416 17:59:47.467635   70853 main.go:141] libmachine: (custom-flannel-726705)       <target port='0'/>
	I0416 17:59:47.467642   70853 main.go:141] libmachine: (custom-flannel-726705)     </serial>
	I0416 17:59:47.467651   70853 main.go:141] libmachine: (custom-flannel-726705)     <console type='pty'>
	I0416 17:59:47.467658   70853 main.go:141] libmachine: (custom-flannel-726705)       <target type='serial' port='0'/>
	I0416 17:59:47.467666   70853 main.go:141] libmachine: (custom-flannel-726705)     </console>
	I0416 17:59:47.467673   70853 main.go:141] libmachine: (custom-flannel-726705)     <rng model='virtio'>
	I0416 17:59:47.467682   70853 main.go:141] libmachine: (custom-flannel-726705)       <backend model='random'>/dev/random</backend>
	I0416 17:59:47.467688   70853 main.go:141] libmachine: (custom-flannel-726705)     </rng>
	I0416 17:59:47.467696   70853 main.go:141] libmachine: (custom-flannel-726705)     
	I0416 17:59:47.467702   70853 main.go:141] libmachine: (custom-flannel-726705)     
	I0416 17:59:47.467710   70853 main.go:141] libmachine: (custom-flannel-726705)   </devices>
	I0416 17:59:47.467716   70853 main.go:141] libmachine: (custom-flannel-726705) </domain>
	I0416 17:59:47.467726   70853 main.go:141] libmachine: (custom-flannel-726705) 
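The <domain> document printed above is then handed to libvirt to define and boot the VM. A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings (cgo, requires the libvirt development headers); this is not the kvm2 driver's actual code, and the XML file name is a placeholder for the document logged above:

// define_domain.go - a minimal sketch, assuming libvirt.org/go/libvirt.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("custom-flannel-726705.xml") // the <domain> XML logged above
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}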
	I0416 17:59:47.469792   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:98:aa:a1 in network default
	I0416 17:59:47.470411   70853 main.go:141] libmachine: (custom-flannel-726705) Ensuring networks are active...
	I0416 17:59:47.470441   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:47.471079   70853 main.go:141] libmachine: (custom-flannel-726705) Ensuring network default is active
	I0416 17:59:47.471444   70853 main.go:141] libmachine: (custom-flannel-726705) Ensuring network mk-custom-flannel-726705 is active
	I0416 17:59:47.472028   70853 main.go:141] libmachine: (custom-flannel-726705) Getting domain xml...
	I0416 17:59:47.472757   70853 main.go:141] libmachine: (custom-flannel-726705) Creating domain...
	I0416 17:59:48.852467   70853 main.go:141] libmachine: (custom-flannel-726705) Waiting to get IP...
	I0416 17:59:48.853431   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:48.853979   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:48.854006   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:48.853945   70875 retry.go:31] will retry after 254.465483ms: waiting for machine to come up
	I0416 17:59:49.110346   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:49.110986   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:49.111005   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:49.110949   70875 retry.go:31] will retry after 371.607637ms: waiting for machine to come up
	I0416 17:59:49.484459   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:49.484996   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:49.485025   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:49.484944   70875 retry.go:31] will retry after 334.420894ms: waiting for machine to come up
	I0416 17:59:49.821584   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:49.822220   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:49.822247   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:49.822160   70875 retry.go:31] will retry after 480.825723ms: waiting for machine to come up
	I0416 17:59:50.305051   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:50.305564   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:50.305587   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:50.305509   70875 retry.go:31] will retry after 741.101971ms: waiting for machine to come up
	I0416 17:59:51.048684   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:51.049279   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:51.049330   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:51.049239   70875 retry.go:31] will retry after 704.311837ms: waiting for machine to come up
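The repeated "will retry after ...: waiting for machine to come up" lines are a jittered, growing backoff while polling for the DHCP lease of the VM's MAC address. An illustrative loop in the same spirit; lookupIP is a hypothetical placeholder for reading the libvirt DHCP lease, not a real minikube function:

// wait_for_ip.go - illustrative retry/backoff loop.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

func lookupIP(mac string) (string, error) {
	// placeholder: a real implementation would parse the network's DHCP leases
	return "", errNoLease
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// jittered, growing backoff, like "will retry after 254ms ... 6.5s"
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 8*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:f8:f8:88", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}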
	I0416 17:59:50.889121   68924 addons.go:505] duration metric: took 1.442125693s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 17:59:50.947741   68924 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-726705" context rescaled to 1 replicas
	I0416 17:59:52.870161   67680 pod_ready.go:92] pod "calico-node-bkzqr" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.870190   67680 pod_ready.go:81] duration metric: took 3.522739385s for pod "calico-node-bkzqr" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.870203   67680 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-6nc69" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.887485   67680 pod_ready.go:92] pod "coredns-76f75df574-6nc69" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.887524   67680 pod_ready.go:81] duration metric: took 17.312814ms for pod "coredns-76f75df574-6nc69" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.887540   67680 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.895226   67680 pod_ready.go:92] pod "etcd-calico-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.895256   67680 pod_ready.go:81] duration metric: took 7.706894ms for pod "etcd-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.895268   67680 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.908224   67680 pod_ready.go:92] pod "kube-apiserver-calico-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.908248   67680 pod_ready.go:81] duration metric: took 12.971818ms for pod "kube-apiserver-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.908257   67680 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.915225   67680 pod_ready.go:92] pod "kube-controller-manager-calico-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.915255   67680 pod_ready.go:81] duration metric: took 6.989899ms for pod "kube-controller-manager-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.915269   67680 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-sjbpp" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.252474   67680 pod_ready.go:92] pod "kube-proxy-sjbpp" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.252498   67680 pod_ready.go:81] duration metric: took 337.222317ms for pod "kube-proxy-sjbpp" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.252507   67680 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.652692   67680 pod_ready.go:92] pod "kube-scheduler-calico-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.652718   67680 pod_ready.go:81] duration metric: took 400.204909ms for pod "kube-scheduler-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.652729   67680 pod_ready.go:38] duration metric: took 21.325723365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
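The pod_ready.go waits above poll each system-critical pod until its PodReady condition reports True. A hand-rolled equivalent using client-go, not minikube's own helper; the kubeconfig path and pod name below are placeholders:

// pod_ready_sketch.go - sketch of waiting for a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "calico-node-bkzqr", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}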
	I0416 17:59:53.652742   67680 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:59:53.652800   67680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:59:53.674956   67680 api_server.go:72] duration metric: took 31.807084154s to wait for apiserver process to appear ...
	I0416 17:59:53.674979   67680 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:59:53.675001   67680 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0416 17:59:53.680630   67680 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0416 17:59:53.682214   67680 api_server.go:141] control plane version: v1.29.3
	I0416 17:59:53.682241   67680 api_server.go:131] duration metric: took 7.254451ms to wait for apiserver health ...
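The healthz probe above is a plain HTTPS GET against the apiserver. A minimal sketch with net/http; minikube authenticates with client certificates, whereas this sketch simply skips TLS verification for brevity:

// healthz_probe.go - minimal apiserver /healthz check.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.220:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.50.220:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}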
	I0416 17:59:53.682252   67680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:59:53.857251   67680 system_pods.go:59] 9 kube-system pods found
	I0416 17:59:53.857292   67680 system_pods.go:61] "calico-kube-controllers-787f445f84-b4whw" [ad66dbea-5e1b-4ec7-a590-e08123083605] Running
	I0416 17:59:53.857299   67680 system_pods.go:61] "calico-node-bkzqr" [d3f35563-9a63-434f-b6e2-c15aecd262f2] Running
	I0416 17:59:53.857303   67680 system_pods.go:61] "coredns-76f75df574-6nc69" [6a801bf3-76c7-4140-950a-9a24bc2aa7d4] Running
	I0416 17:59:53.857307   67680 system_pods.go:61] "etcd-calico-726705" [34a5958f-e21f-4391-a23e-99bec66ee776] Running
	I0416 17:59:53.857310   67680 system_pods.go:61] "kube-apiserver-calico-726705" [399ed59c-b133-4b4c-9d39-ddf42bfc1bf0] Running
	I0416 17:59:53.857313   67680 system_pods.go:61] "kube-controller-manager-calico-726705" [12d86864-17d9-46dc-90f4-53507f21f96e] Running
	I0416 17:59:53.857315   67680 system_pods.go:61] "kube-proxy-sjbpp" [eb7274d2-473c-4ffb-8867-19ac63f3747b] Running
	I0416 17:59:53.857320   67680 system_pods.go:61] "kube-scheduler-calico-726705" [4ea980a4-1603-41ae-aeab-56fefd3ba6e8] Running
	I0416 17:59:53.857323   67680 system_pods.go:61] "storage-provisioner" [8c0046c2-65f4-4571-ae01-ec0c8de967a9] Running
	I0416 17:59:53.857330   67680 system_pods.go:74] duration metric: took 175.07111ms to wait for pod list to return data ...
	I0416 17:59:53.857339   67680 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:59:54.051685   67680 default_sa.go:45] found service account: "default"
	I0416 17:59:54.051710   67680 default_sa.go:55] duration metric: took 194.363862ms for default service account to be created ...
	I0416 17:59:54.051717   67680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:59:54.257885   67680 system_pods.go:86] 9 kube-system pods found
	I0416 17:59:54.257913   67680 system_pods.go:89] "calico-kube-controllers-787f445f84-b4whw" [ad66dbea-5e1b-4ec7-a590-e08123083605] Running
	I0416 17:59:54.257919   67680 system_pods.go:89] "calico-node-bkzqr" [d3f35563-9a63-434f-b6e2-c15aecd262f2] Running
	I0416 17:59:54.257923   67680 system_pods.go:89] "coredns-76f75df574-6nc69" [6a801bf3-76c7-4140-950a-9a24bc2aa7d4] Running
	I0416 17:59:54.257928   67680 system_pods.go:89] "etcd-calico-726705" [34a5958f-e21f-4391-a23e-99bec66ee776] Running
	I0416 17:59:54.257932   67680 system_pods.go:89] "kube-apiserver-calico-726705" [399ed59c-b133-4b4c-9d39-ddf42bfc1bf0] Running
	I0416 17:59:54.257935   67680 system_pods.go:89] "kube-controller-manager-calico-726705" [12d86864-17d9-46dc-90f4-53507f21f96e] Running
	I0416 17:59:54.257939   67680 system_pods.go:89] "kube-proxy-sjbpp" [eb7274d2-473c-4ffb-8867-19ac63f3747b] Running
	I0416 17:59:54.257943   67680 system_pods.go:89] "kube-scheduler-calico-726705" [4ea980a4-1603-41ae-aeab-56fefd3ba6e8] Running
	I0416 17:59:54.257946   67680 system_pods.go:89] "storage-provisioner" [8c0046c2-65f4-4571-ae01-ec0c8de967a9] Running
	I0416 17:59:54.257952   67680 system_pods.go:126] duration metric: took 206.230185ms to wait for k8s-apps to be running ...
	I0416 17:59:54.257958   67680 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:59:54.257999   67680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:59:54.276316   67680 system_svc.go:56] duration metric: took 18.341061ms WaitForService to wait for kubelet
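The kubelet check is a single systemctl call whose exit code carries the answer. A local os/exec version of the same idea (in minikube the command runs over SSH inside the VM):

// kubelet_active.go - check the kubelet unit via its exit code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet makes systemctl report the state only through its exit code.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	if err == nil {
		fmt.Println("kubelet service is running")
	} else {
		fmt.Println("kubelet service is not active:", err)
	}
}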
	I0416 17:59:54.276346   67680 kubeadm.go:576] duration metric: took 32.408476431s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:59:54.276369   67680 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:59:54.452764   67680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:59:54.452793   67680 node_conditions.go:123] node cpu capacity is 2
	I0416 17:59:54.452804   67680 node_conditions.go:105] duration metric: took 176.430861ms to run NodePressure ...
	I0416 17:59:54.452815   67680 start.go:240] waiting for startup goroutines ...
	I0416 17:59:54.452821   67680 start.go:245] waiting for cluster config update ...
	I0416 17:59:54.452830   67680 start.go:254] writing updated cluster config ...
	I0416 17:59:54.453132   67680 ssh_runner.go:195] Run: rm -f paused
	I0416 17:59:54.506191   67680 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 17:59:54.507963   67680 out.go:177] * Done! kubectl is now configured to use "calico-726705" cluster and "default" namespace by default
	I0416 17:59:52.450633   68924 node_ready.go:49] node "kindnet-726705" has status "Ready":"True"
	I0416 17:59:52.450657   68924 node_ready.go:38] duration metric: took 2.007953886s for node "kindnet-726705" to be "Ready" ...
	I0416 17:59:52.450668   68924 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:59:52.460469   68924 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-dlv6g" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.971276   68924 pod_ready.go:92] pod "coredns-76f75df574-dlv6g" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.971308   68924 pod_ready.go:81] duration metric: took 1.510812192s for pod "coredns-76f75df574-dlv6g" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.971321   68924 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.977067   68924 pod_ready.go:92] pod "etcd-kindnet-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.977090   68924 pod_ready.go:81] duration metric: took 5.760643ms for pod "etcd-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.977104   68924 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.982696   68924 pod_ready.go:92] pod "kube-apiserver-kindnet-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.982717   68924 pod_ready.go:81] duration metric: took 5.604294ms for pod "kube-apiserver-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.982730   68924 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.989121   68924 pod_ready.go:92] pod "kube-controller-manager-kindnet-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.989149   68924 pod_ready.go:81] duration metric: took 6.410855ms for pod "kube-controller-manager-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.989158   68924 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-r8xjf" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:54.046867   68924 pod_ready.go:92] pod "kube-proxy-r8xjf" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:54.046900   68924 pod_ready.go:81] duration metric: took 57.733053ms for pod "kube-proxy-r8xjf" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:54.046912   68924 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:54.447711   68924 pod_ready.go:92] pod "kube-scheduler-kindnet-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:54.447733   68924 pod_ready.go:81] duration metric: took 400.814119ms for pod "kube-scheduler-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:54.447743   68924 pod_ready.go:38] duration metric: took 1.997061859s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:59:54.447756   68924 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:59:54.447797   68924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:59:54.472955   68924 api_server.go:72] duration metric: took 5.026008338s to wait for apiserver process to appear ...
	I0416 17:59:54.472984   68924 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:59:54.473004   68924 api_server.go:253] Checking apiserver healthz at https://192.168.61.229:8443/healthz ...
	I0416 17:59:54.481384   68924 api_server.go:279] https://192.168.61.229:8443/healthz returned 200:
	ok
	I0416 17:59:54.482940   68924 api_server.go:141] control plane version: v1.29.3
	I0416 17:59:54.482961   68924 api_server.go:131] duration metric: took 9.970844ms to wait for apiserver health ...
	I0416 17:59:54.482968   68924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:59:54.652111   68924 system_pods.go:59] 8 kube-system pods found
	I0416 17:59:54.652157   68924 system_pods.go:61] "coredns-76f75df574-dlv6g" [de29708f-8d27-4aab-ab71-b91614e1a3c8] Running
	I0416 17:59:54.652166   68924 system_pods.go:61] "etcd-kindnet-726705" [19c979dd-a889-40d5-b1cf-7a855ede4f69] Running
	I0416 17:59:54.652171   68924 system_pods.go:61] "kindnet-5vb2l" [4799cba0-132a-44b3-9481-193b7258ced4] Running
	I0416 17:59:54.652177   68924 system_pods.go:61] "kube-apiserver-kindnet-726705" [f9310118-5fb2-4c22-b91a-595dd76e263f] Running
	I0416 17:59:54.652181   68924 system_pods.go:61] "kube-controller-manager-kindnet-726705" [4ea65dee-c5bc-49de-9281-53f5b9a7b161] Running
	I0416 17:59:54.652187   68924 system_pods.go:61] "kube-proxy-r8xjf" [4e5f5faf-31ff-4753-beea-6180b2d560c9] Running
	I0416 17:59:54.652191   68924 system_pods.go:61] "kube-scheduler-kindnet-726705" [bacdbf9d-9c91-40f2-80b4-a468d38fed67] Running
	I0416 17:59:54.652195   68924 system_pods.go:61] "storage-provisioner" [8831c5db-6d7d-475d-ab5b-d44ee3eb48b9] Running
	I0416 17:59:54.652208   68924 system_pods.go:74] duration metric: took 169.233773ms to wait for pod list to return data ...
	I0416 17:59:54.652221   68924 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:59:54.846697   68924 default_sa.go:45] found service account: "default"
	I0416 17:59:54.846732   68924 default_sa.go:55] duration metric: took 194.499936ms for default service account to be created ...
	I0416 17:59:54.846746   68924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:59:55.051211   68924 system_pods.go:86] 8 kube-system pods found
	I0416 17:59:55.051243   68924 system_pods.go:89] "coredns-76f75df574-dlv6g" [de29708f-8d27-4aab-ab71-b91614e1a3c8] Running
	I0416 17:59:55.051251   68924 system_pods.go:89] "etcd-kindnet-726705" [19c979dd-a889-40d5-b1cf-7a855ede4f69] Running
	I0416 17:59:55.051258   68924 system_pods.go:89] "kindnet-5vb2l" [4799cba0-132a-44b3-9481-193b7258ced4] Running
	I0416 17:59:55.051264   68924 system_pods.go:89] "kube-apiserver-kindnet-726705" [f9310118-5fb2-4c22-b91a-595dd76e263f] Running
	I0416 17:59:55.051271   68924 system_pods.go:89] "kube-controller-manager-kindnet-726705" [4ea65dee-c5bc-49de-9281-53f5b9a7b161] Running
	I0416 17:59:55.051276   68924 system_pods.go:89] "kube-proxy-r8xjf" [4e5f5faf-31ff-4753-beea-6180b2d560c9] Running
	I0416 17:59:55.051282   68924 system_pods.go:89] "kube-scheduler-kindnet-726705" [bacdbf9d-9c91-40f2-80b4-a468d38fed67] Running
	I0416 17:59:55.051288   68924 system_pods.go:89] "storage-provisioner" [8831c5db-6d7d-475d-ab5b-d44ee3eb48b9] Running
	I0416 17:59:55.051296   68924 system_pods.go:126] duration metric: took 204.543354ms to wait for k8s-apps to be running ...
	I0416 17:59:55.051308   68924 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:59:55.051359   68924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:59:55.078146   68924 system_svc.go:56] duration metric: took 26.827432ms WaitForService to wait for kubelet
	I0416 17:59:55.078182   68924 kubeadm.go:576] duration metric: took 5.631238414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:59:55.078209   68924 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:59:55.247278   68924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:59:55.247320   68924 node_conditions.go:123] node cpu capacity is 2
	I0416 17:59:55.247333   68924 node_conditions.go:105] duration metric: took 169.118479ms to run NodePressure ...
	I0416 17:59:55.247349   68924 start.go:240] waiting for startup goroutines ...
	I0416 17:59:55.247359   68924 start.go:245] waiting for cluster config update ...
	I0416 17:59:55.247373   68924 start.go:254] writing updated cluster config ...
	I0416 17:59:55.247676   68924 ssh_runner.go:195] Run: rm -f paused
	I0416 17:59:55.298161   68924 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 17:59:55.301083   68924 out.go:177] * Done! kubectl is now configured to use "kindnet-726705" cluster and "default" namespace by default
	I0416 17:59:51.755079   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:51.755551   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:51.755577   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:51.755506   70875 retry.go:31] will retry after 1.109917667s: waiting for machine to come up
	I0416 17:59:52.867274   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:52.868007   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:52.868036   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:52.867933   70875 retry.go:31] will retry after 997.019923ms: waiting for machine to come up
	I0416 17:59:53.866951   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:53.867504   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:53.867537   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:53.867473   70875 retry.go:31] will retry after 1.344016763s: waiting for machine to come up
	I0416 17:59:55.212622   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:55.213188   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:55.213225   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:55.213157   70875 retry.go:31] will retry after 1.719289923s: waiting for machine to come up
	I0416 17:59:56.933873   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:56.934383   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:56.934403   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:56.934336   70875 retry.go:31] will retry after 2.10573305s: waiting for machine to come up
	I0416 17:59:59.041129   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:59.041566   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:59.041590   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:59.041546   70875 retry.go:31] will retry after 2.621818883s: waiting for machine to come up
	I0416 18:00:01.666081   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:01.666695   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 18:00:01.666723   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 18:00:01.666642   70875 retry.go:31] will retry after 3.415105578s: waiting for machine to come up
	I0416 18:00:05.083442   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:05.084006   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 18:00:05.084035   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 18:00:05.083957   70875 retry.go:31] will retry after 3.54402725s: waiting for machine to come up
	I0416 18:00:08.630056   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:08.630600   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 18:00:08.630645   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 18:00:08.630549   70875 retry.go:31] will retry after 6.533819056s: waiting for machine to come up
	I0416 18:00:15.165712   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:15.166331   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has current primary IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:15.166363   70853 main.go:141] libmachine: (custom-flannel-726705) Found IP for machine: 192.168.72.208
	I0416 18:00:15.166373   70853 main.go:141] libmachine: (custom-flannel-726705) Reserving static IP address...
	I0416 18:00:15.166663   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find host DHCP lease matching {name: "custom-flannel-726705", mac: "52:54:00:f8:f8:88", ip: "192.168.72.208"} in network mk-custom-flannel-726705
	I0416 18:00:15.242221   70853 main.go:141] libmachine: (custom-flannel-726705) Reserved static IP address: 192.168.72.208
	I0416 18:00:15.242241   70853 main.go:141] libmachine: (custom-flannel-726705) Waiting for SSH to be available...
	I0416 18:00:15.242262   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Getting to WaitForSSH function...
	I0416 18:00:15.244938   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:15.245352   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705
	I0416 18:00:15.245377   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find defined IP address of network mk-custom-flannel-726705 interface with MAC address 52:54:00:f8:f8:88
	I0416 18:00:15.245535   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using SSH client type: external
	I0416 18:00:15.245558   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa (-rw-------)
	I0416 18:00:15.245601   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 18:00:15.245615   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | About to run SSH command:
	I0416 18:00:15.245648   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | exit 0
	I0416 18:00:15.249441   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | SSH cmd err, output: exit status 255: 
	I0416 18:00:15.249454   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0416 18:00:15.249461   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | command : exit 0
	I0416 18:00:15.249469   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | err     : exit status 255
	I0416 18:00:15.249480   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | output  : 
	I0416 18:00:18.250611   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Getting to WaitForSSH function...
	I0416 18:00:18.253154   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.253605   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.253630   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.253712   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using SSH client type: external
	I0416 18:00:18.253739   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa (-rw-------)
	I0416 18:00:18.253780   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 18:00:18.253799   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | About to run SSH command:
	I0416 18:00:18.253813   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | exit 0
	I0416 18:00:18.390225   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | SSH cmd err, output: <nil>: 
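WaitForSSH above shells out to the system ssh client with non-interactive options and retries "exit 0" until it succeeds; the first attempt fails with status 255 because the DHCP lease had no IP yet. An illustrative version of that loop, where the address and key path are placeholders:

// wait_for_ssh.go - poll `ssh ... exit 0` until the guest answers.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(addr, key string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@" + addr,
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	addr := "192.168.72.208"                                      // placeholder
	key := "/home/jenkins/.minikube/machines/custom-flannel-726705/id_rsa" // placeholder
	for i := 0; i < 20; i++ {
		if sshReady(addr, key) {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("SSH not ready yet, retrying in 3s")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}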
	I0416 18:00:18.390421   70853 main.go:141] libmachine: (custom-flannel-726705) KVM machine creation complete!
	I0416 18:00:18.390684   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetConfigRaw
	I0416 18:00:18.391202   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:18.391372   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:18.391524   70853 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 18:00:18.391538   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetState
	I0416 18:00:18.392869   70853 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 18:00:18.392889   70853 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 18:00:18.392897   70853 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 18:00:18.392906   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.395463   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.395909   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.395956   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.396325   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:18.396487   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.396654   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.396776   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:18.396966   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:18.397167   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:18.397182   70853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 18:00:18.508737   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:00:18.508761   70853 main.go:141] libmachine: Detecting the provisioner...
	I0416 18:00:18.508770   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.512088   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.512524   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.512554   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.512798   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:18.513191   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.513373   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.513556   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:18.513762   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:18.513950   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:18.513966   70853 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 18:00:18.638298   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 18:00:18.638375   70853 main.go:141] libmachine: found compatible host: buildroot
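The provisioner is detected by reading /etc/os-release over SSH and matching the ID field ("buildroot" on the minikube ISO). A local sketch of the same parsing:

// detect_provisioner.go - pull the ID field out of /etc/os-release.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		fmt.Println("cannot read os-release:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			fmt.Println("found host:", id) // e.g. "buildroot"
			return
		}
	}
	fmt.Println("ID field not found")
}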
	I0416 18:00:18.638395   70853 main.go:141] libmachine: Provisioning with buildroot...
	I0416 18:00:18.638405   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetMachineName
	I0416 18:00:18.638683   70853 buildroot.go:166] provisioning hostname "custom-flannel-726705"
	I0416 18:00:18.638712   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetMachineName
	I0416 18:00:18.638919   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.641618   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.641968   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.642023   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.642155   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:18.642336   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.642511   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.642700   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:18.642859   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:18.643006   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:18.643018   70853 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-726705 && echo "custom-flannel-726705" | sudo tee /etc/hostname
	I0416 18:00:18.776341   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-726705
	
	I0416 18:00:18.776371   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.779051   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.779447   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.779473   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.779835   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:18.780017   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.780212   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.780390   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:18.780559   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:18.780764   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:18.780787   70853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-726705' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-726705/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-726705' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:00:18.930369   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:00:18.930397   70853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 18:00:18.930417   70853 buildroot.go:174] setting up certificates
	I0416 18:00:18.930442   70853 provision.go:84] configureAuth start
	I0416 18:00:18.930462   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetMachineName
	I0416 18:00:18.930709   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetIP
	I0416 18:00:18.933541   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.933944   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.933973   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.934148   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.936478   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.936792   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.936823   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.936961   70853 provision.go:143] copyHostCerts
	I0416 18:00:18.937009   70853 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 18:00:18.937030   70853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 18:00:18.937107   70853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 18:00:18.937227   70853 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 18:00:18.937238   70853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 18:00:18.937269   70853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 18:00:18.937384   70853 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 18:00:18.937397   70853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 18:00:18.937439   70853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 18:00:18.937524   70853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-726705 san=[127.0.0.1 192.168.72.208 custom-flannel-726705 localhost minikube]
	I0416 18:00:19.043789   70853 provision.go:177] copyRemoteCerts
	I0416 18:00:19.043855   70853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:00:19.043877   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.047053   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.047441   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.047467   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.047680   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.047872   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.048058   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.048230   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:19.137908   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 18:00:19.173289   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 18:00:19.203030   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 18:00:19.231087   70853 provision.go:87] duration metric: took 300.626256ms to configureAuth
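For reference, the configureAuth step logged above signs a server certificate against the shared minikube CA with the SAN set shown in provision.go:117 (127.0.0.1, 192.168.72.208, custom-flannel-726705, localhost, minikube). A minimal Go sketch of that kind of issuance; it is not the actual provision.go code, and it uses a throwaway in-memory CA instead of the persistent ca.pem/ca-key.pem, so everything beyond the SAN set and org taken from the log is an assumption:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (stands in for the persistent ca.pem/ca-key.pem the log reuses).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs and org from the provision.go:117 line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-726705"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"custom-flannel-726705", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.208")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }

The resulting server.pem/server-key.pem pair is what the next lines scp into /etc/docker on the guest.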
	I0416 18:00:19.231111   70853 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:00:19.231263   70853 config.go:182] Loaded profile config "custom-flannel-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 18:00:19.231344   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.234018   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.234388   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.234410   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.234633   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.234823   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.234983   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.235100   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.235258   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:19.235467   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:19.235488   70853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 18:00:19.573205   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 18:00:19.573237   70853 main.go:141] libmachine: Checking connection to Docker...
	I0416 18:00:19.573249   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetURL
	I0416 18:00:19.574669   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using libvirt version 6000000
	I0416 18:00:19.577450   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.577845   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.577866   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.578117   70853 main.go:141] libmachine: Docker is up and running!
	I0416 18:00:19.578134   70853 main.go:141] libmachine: Reticulating splines...
	I0416 18:00:19.578142   70853 client.go:171] duration metric: took 32.740795237s to LocalClient.Create
	I0416 18:00:19.578164   70853 start.go:167] duration metric: took 32.740876359s to libmachine.API.Create "custom-flannel-726705"
	I0416 18:00:19.578171   70853 start.go:293] postStartSetup for "custom-flannel-726705" (driver="kvm2")
	I0416 18:00:19.578192   70853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:00:19.578213   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.578527   70853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:00:19.578557   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.581627   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.582001   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.582035   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.582155   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.582373   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.582592   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.582750   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:19.677894   70853 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:00:19.683287   70853 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:00:19.683313   70853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 18:00:19.683372   70853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 18:00:19.683481   70853 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 18:00:19.683606   70853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:00:19.698090   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 18:00:19.730549   70853 start.go:296] duration metric: took 152.362629ms for postStartSetup
	I0416 18:00:19.730601   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetConfigRaw
	I0416 18:00:19.731136   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetIP
	I0416 18:00:19.734429   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.734863   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.734894   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.735140   70853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/config.json ...
	I0416 18:00:19.735352   70853 start.go:128] duration metric: took 32.917150472s to createHost
	I0416 18:00:19.735384   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.737918   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.740943   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.740949   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.740972   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.741096   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.741274   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.741376   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.741491   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:19.741673   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:19.741680   70853 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 18:00:19.864331   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290419.854414418
	
	I0416 18:00:19.864352   70853 fix.go:216] guest clock: 1713290419.854414418
	I0416 18:00:19.864362   70853 fix.go:229] Guest: 2024-04-16 18:00:19.854414418 +0000 UTC Remote: 2024-04-16 18:00:19.735367817 +0000 UTC m=+33.054203795 (delta=119.046601ms)
	I0416 18:00:19.864382   70853 fix.go:200] guest clock delta is within tolerance: 119.046601ms
	I0416 18:00:19.864388   70853 start.go:83] releasing machines lock for "custom-flannel-726705", held for 33.046308881s
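The fix.go lines above compare the guest clock returned over SSH (1713290419.854414418) with the host clock and accept the machine when the skew is within tolerance (here ~119ms). A tiny sketch of that check, reusing the two timestamps from the log; the one-second tolerance is an assumed value for illustration only:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the absolute guest/host clock delta is <= tol.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tol
    }

    func main() {
        guest := time.Unix(0, 1713290419854414418) // 1713290419.854414418 from the log
        host := time.Date(2024, 4, 16, 18, 0, 19, 735367817, time.UTC)
        fmt.Println(withinTolerance(guest, host, time.Second)) // true (delta ≈ 119ms)
    }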
	I0416 18:00:19.864409   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.864729   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetIP
	I0416 18:00:19.867889   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.868531   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.868564   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.868706   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.869178   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.869334   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.869411   70853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:00:19.869450   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.869497   70853 ssh_runner.go:195] Run: cat /version.json
	I0416 18:00:19.869516   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.872781   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.873189   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.873218   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.873238   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.873575   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.873728   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.873753   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.873763   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.873960   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.873960   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.874151   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.874149   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:19.874285   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.874405   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:19.986748   70853 ssh_runner.go:195] Run: systemctl --version
	I0416 18:00:19.994746   70853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 18:00:20.187776   70853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 18:00:20.197968   70853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:00:20.198041   70853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:00:20.223667   70853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:00:20.223692   70853 start.go:494] detecting cgroup driver to use...
	I0416 18:00:20.223751   70853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:00:20.248144   70853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:20.274630   70853 docker.go:217] disabling cri-docker service (if available) ...
	I0416 18:00:20.274678   70853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 18:00:20.294534   70853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 18:00:20.316881   70853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 18:00:20.466406   70853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 18:00:20.657383   70853 docker.go:233] disabling docker service ...
	I0416 18:00:20.657441   70853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 18:00:20.674201   70853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 18:00:20.692470   70853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 18:00:20.863055   70853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 18:00:21.030723   70853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 18:00:21.053688   70853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:21.083906   70853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 18:00:21.083959   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.097666   70853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 18:00:21.097723   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.115041   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.129013   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.141986   70853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:00:21.156633   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.168821   70853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.191486   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.204574   70853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:00:21.217699   70853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 18:00:21.217750   70853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 18:00:21.236175   70853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:00:21.246548   70853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:21.389640   70853 ssh_runner.go:195] Run: sudo systemctl restart crio
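The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image registry.k8s.io/pause:3.9, cgroup_manager "cgroupfs", conmon_cgroup "pod", the unprivileged-port sysctl) and then restarts CRI-O. A hedged Go sketch of the first two rewrites, equivalent in effect to the logged sed commands but not the code minikube actually runs; it needs root, and a "systemctl restart crio" (as in the log) is still required afterwards:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            panic(err)
        }
    }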
	I0416 18:00:21.597472   70853 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 18:00:21.597554   70853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 18:00:21.603212   70853 start.go:562] Will wait 60s for crictl version
	I0416 18:00:21.603268   70853 ssh_runner.go:195] Run: which crictl
	I0416 18:00:21.608408   70853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 18:00:21.655672   70853 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 18:00:21.655764   70853 ssh_runner.go:195] Run: crio --version
	I0416 18:00:21.694526   70853 ssh_runner.go:195] Run: crio --version
	I0416 18:00:21.764735   70853 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 18:00:21.794888   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetIP
	I0416 18:00:21.803868   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:21.804482   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:21.804513   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:21.804608   70853 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0416 18:00:21.813299   70853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:21.832782   70853 kubeadm.go:877] updating cluster {Name:custom-flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 18:00:21.832944   70853 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 18:00:21.833003   70853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 18:00:21.888143   70853 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 18:00:21.888220   70853 ssh_runner.go:195] Run: which lz4
	I0416 18:00:21.893328   70853 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 18:00:21.899169   70853 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 18:00:21.899199   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 18:00:23.899772   70853 crio.go:462] duration metric: took 2.006466559s to copy over tarball
	I0416 18:00:23.899861   70853 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 18:00:27.390646   70853 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.490757977s)
	I0416 18:00:27.390669   70853 crio.go:469] duration metric: took 3.490871794s to extract the tarball
	I0416 18:00:27.390677   70853 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 18:00:27.441381   70853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 18:00:27.495018   70853 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 18:00:27.495057   70853 cache_images.go:84] Images are preloaded, skipping loading
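The preload verification above shells out to "sudo crictl images --output json" and decides whether the expected control-plane images (e.g. registry.k8s.io/kube-apiserver:v1.29.3) are already present. A short sketch of such a check; the JSON field names ("images", "repoTags") are assumptions based on typical crictl output rather than anything shown in this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            panic(err)
        }
        const want = "registry.k8s.io/kube-apiserver:v1.29.3"
        found := false
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, want) {
                    found = true
                }
            }
        }
        fmt.Println("preloaded:", found)
    }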
	I0416 18:00:27.495068   70853 kubeadm.go:928] updating node { 192.168.72.208 8443 v1.29.3 crio true true} ...
	I0416 18:00:27.495184   70853 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-726705 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0416 18:00:27.495259   70853 ssh_runner.go:195] Run: crio config
	I0416 18:00:27.564097   70853 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0416 18:00:27.564142   70853 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 18:00:27.564168   70853 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-726705 NodeName:custom-flannel-726705 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 18:00:27.564327   70853 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-726705"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 18:00:27.564398   70853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:27.581946   70853 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 18:00:27.582006   70853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 18:00:27.596778   70853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0416 18:00:27.620600   70853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 18:00:27.653618   70853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
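The kubeadm.yaml.new copied above is the multi-document manifest printed earlier in the log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small illustrative sketch of reading it back and confirming the kubelet's cgroupDriver matches the "cgroupfs" value configured for CRI-O earlier; it uses gopkg.in/yaml.v3 and is not part of minikube itself:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp line above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            if doc["kind"] == "KubeletConfiguration" {
                // Expected to print "cgroupfs", matching the CRI-O cgroup_manager set earlier.
                fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
            }
        }
    }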
	I0416 18:00:27.678176   70853 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0416 18:00:27.683898   70853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:27.706720   70853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:27.882890   70853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:27.907669   70853 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705 for IP: 192.168.72.208
	I0416 18:00:27.907695   70853 certs.go:194] generating shared ca certs ...
	I0416 18:00:27.907713   70853 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:27.907877   70853 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 18:00:27.907930   70853 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 18:00:27.907938   70853 certs.go:256] generating profile certs ...
	I0416 18:00:27.907997   70853 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.key
	I0416 18:00:27.908010   70853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt with IP's: []
	I0416 18:00:28.048279   70853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt ...
	I0416 18:00:28.048322   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: {Name:mk6b828d2b96effaf22b6c2ec84aebb3f20f7062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.048511   70853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.key ...
	I0416 18:00:28.048531   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.key: {Name:mk030ab0a24a84996e9b36f6aa8cf72fe4a066b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.048644   70853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key.b7a5c0ec
	I0416 18:00:28.048668   70853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt.b7a5c0ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.208]
	I0416 18:00:28.211750   70853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt.b7a5c0ec ...
	I0416 18:00:28.211794   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt.b7a5c0ec: {Name:mkc45b78b3318c500079043bbc606f14cac7bb2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.212048   70853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key.b7a5c0ec ...
	I0416 18:00:28.212070   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key.b7a5c0ec: {Name:mkc3f611741decf54b31f39aed75c69de9364b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.212209   70853 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt.b7a5c0ec -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt
	I0416 18:00:28.212340   70853 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key.b7a5c0ec -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key
	I0416 18:00:28.212430   70853 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.key
	I0416 18:00:28.212458   70853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.crt with IP's: []
	I0416 18:00:28.423815   70853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.crt ...
	I0416 18:00:28.423861   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.crt: {Name:mkd8a3e50da785097215b10ef7406a9cb8a93c68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.424030   70853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.key ...
	I0416 18:00:28.424045   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.key: {Name:mk9c4ea8f9e65ca825a1c56c46a015976fc19be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.424282   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 18:00:28.424324   70853 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 18:00:28.424339   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 18:00:28.424371   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 18:00:28.424398   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 18:00:28.424429   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 18:00:28.424484   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 18:00:28.425155   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 18:00:28.458543   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 18:00:28.496121   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 18:00:28.538504   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 18:00:28.580811   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 18:00:28.635340   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 18:00:28.680240   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 18:00:28.736671   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 18:00:28.769717   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 18:00:28.802699   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 18:00:28.832540   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 18:00:28.866722   70853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 18:00:28.893303   70853 ssh_runner.go:195] Run: openssl version
	I0416 18:00:28.903622   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 18:00:28.928137   70853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 18:00:28.933963   70853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 18:00:28.934013   70853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 18:00:28.940815   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 18:00:28.956540   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 18:00:28.970256   70853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:28.975737   70853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:28.975781   70853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:28.982773   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 18:00:28.996658   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 18:00:29.009701   70853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 18:00:29.017036   70853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 18:00:29.017092   70853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 18:00:29.025473   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 18:00:29.040792   70853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:00:29.047994   70853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:29.048046   70853 kubeadm.go:391] StartCluster: {Name:custom-flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:00:29.048145   70853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 18:00:29.048205   70853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 18:00:29.102513   70853 cri.go:89] found id: ""
	I0416 18:00:29.102594   70853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 18:00:29.115566   70853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 18:00:29.126909   70853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 18:00:29.138518   70853 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 18:00:29.138535   70853 kubeadm.go:156] found existing configuration files:
	
	I0416 18:00:29.138568   70853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 18:00:29.149607   70853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 18:00:29.149655   70853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 18:00:29.160449   70853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 18:00:29.171821   70853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 18:00:29.171875   70853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 18:00:29.184397   70853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 18:00:29.197577   70853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 18:00:29.197631   70853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 18:00:29.210455   70853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 18:00:29.220309   70853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 18:00:29.220370   70853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 18:00:29.233920   70853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 18:00:29.452954   70853 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 18:00:40.930575   70853 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 18:00:40.930652   70853 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 18:00:40.930747   70853 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 18:00:40.930862   70853 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 18:00:40.930967   70853 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 18:00:40.931041   70853 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 18:00:40.932957   70853 out.go:204]   - Generating certificates and keys ...
	I0416 18:00:40.933049   70853 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 18:00:40.933151   70853 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 18:00:40.933264   70853 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 18:00:40.933354   70853 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 18:00:40.933449   70853 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 18:00:40.933530   70853 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 18:00:40.933617   70853 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 18:00:40.933757   70853 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-726705 localhost] and IPs [192.168.72.208 127.0.0.1 ::1]
	I0416 18:00:40.933808   70853 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 18:00:40.933918   70853 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-726705 localhost] and IPs [192.168.72.208 127.0.0.1 ::1]
	I0416 18:00:40.933995   70853 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 18:00:40.934081   70853 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 18:00:40.934121   70853 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 18:00:40.934193   70853 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 18:00:40.934243   70853 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 18:00:40.934317   70853 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 18:00:40.934390   70853 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 18:00:40.934471   70853 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 18:00:40.934563   70853 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 18:00:40.934672   70853 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 18:00:40.934794   70853 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 18:00:40.936706   70853 out.go:204]   - Booting up control plane ...
	I0416 18:00:40.936820   70853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 18:00:40.936937   70853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 18:00:40.937001   70853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 18:00:40.937094   70853 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 18:00:40.937166   70853 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 18:00:40.937204   70853 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 18:00:40.937334   70853 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 18:00:40.937425   70853 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002767 seconds
	I0416 18:00:40.937518   70853 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 18:00:40.937622   70853 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 18:00:40.937693   70853 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 18:00:40.937927   70853 kubeadm.go:309] [mark-control-plane] Marking the node custom-flannel-726705 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 18:00:40.937991   70853 kubeadm.go:309] [bootstrap-token] Using token: 1suvyo.23p3gvrlr33x42m0
	I0416 18:00:40.939765   70853 out.go:204]   - Configuring RBAC rules ...
	I0416 18:00:40.939868   70853 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 18:00:40.939960   70853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 18:00:40.940096   70853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 18:00:40.940285   70853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 18:00:40.940458   70853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 18:00:40.940572   70853 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 18:00:40.940718   70853 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 18:00:40.940777   70853 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 18:00:40.940868   70853 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 18:00:40.940878   70853 kubeadm.go:309] 
	I0416 18:00:40.940960   70853 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 18:00:40.940971   70853 kubeadm.go:309] 
	I0416 18:00:40.941083   70853 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 18:00:40.941092   70853 kubeadm.go:309] 
	I0416 18:00:40.941125   70853 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 18:00:40.941207   70853 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 18:00:40.941272   70853 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 18:00:40.941282   70853 kubeadm.go:309] 
	I0416 18:00:40.941380   70853 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 18:00:40.941390   70853 kubeadm.go:309] 
	I0416 18:00:40.941447   70853 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 18:00:40.941457   70853 kubeadm.go:309] 
	I0416 18:00:40.941526   70853 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 18:00:40.941623   70853 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 18:00:40.941718   70853 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 18:00:40.941728   70853 kubeadm.go:309] 
	I0416 18:00:40.941816   70853 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 18:00:40.941941   70853 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 18:00:40.941953   70853 kubeadm.go:309] 
	I0416 18:00:40.942080   70853 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1suvyo.23p3gvrlr33x42m0 \
	I0416 18:00:40.942239   70853 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 \
	I0416 18:00:40.942282   70853 kubeadm.go:309] 	--control-plane 
	I0416 18:00:40.942292   70853 kubeadm.go:309] 
	I0416 18:00:40.942384   70853 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 18:00:40.942391   70853 kubeadm.go:309] 
	I0416 18:00:40.942461   70853 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1suvyo.23p3gvrlr33x42m0 \
	I0416 18:00:40.942572   70853 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 
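	The join commands printed above embed a short-lived bootstrap token and the cluster CA certificate hash. If such a token later expires, a replacement join command can usually be regenerated on the control-plane node with the standard kubeadm CLI; this is a minimal sketch, not output taken from this run:
	    # list existing bootstrap tokens and their remaining TTLs
	    sudo kubeadm token list
	    # mint a new token and print a complete "kubeadm join ..." command,
	    # including the matching --discovery-token-ca-cert-hash
	    sudo kubeadm token create --print-join-command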
	I0416 18:00:40.942591   70853 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0416 18:00:40.945415   70853 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0416 18:00:40.947027   70853 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 18:00:40.947087   70853 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0416 18:00:40.958786   70853 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0416 18:00:40.958822   70853 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0416 18:00:41.152889   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 18:00:41.678971   70853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 18:00:41.679090   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-726705 minikube.k8s.io/updated_at=2024_04_16T18_00_41_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=custom-flannel-726705 minikube.k8s.io/primary=true
	I0416 18:00:41.679107   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:41.837109   70853 ops.go:34] apiserver oom_adj: -16
	I0416 18:00:41.837596   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:42.337610   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:42.838288   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:43.337867   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:43.838354   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:44.338620   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:44.837869   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:45.337670   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:45.837761   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:46.338607   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:46.838070   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:47.338644   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:47.837960   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:48.338197   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:48.838492   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:49.337914   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:49.838057   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:50.338264   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:50.837668   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:51.338598   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:51.838284   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:52.337707   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:52.838534   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:53.338572   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:53.529231   70853 kubeadm.go:1107] duration metric: took 11.850195459s to wait for elevateKubeSystemPrivileges
	W0416 18:00:53.529271   70853 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 18:00:53.529281   70853 kubeadm.go:393] duration metric: took 24.481237931s to StartCluster
	I0416 18:00:53.529300   70853 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:53.529379   70853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 18:00:53.530285   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:53.530569   70853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 18:00:53.530592   70853 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 18:00:53.533110   70853 out.go:177] * Verifying Kubernetes components...
	I0416 18:00:53.530691   70853 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 18:00:53.530789   70853 config.go:182] Loaded profile config "custom-flannel-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 18:00:53.533189   70853 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-726705"
	I0416 18:00:53.534591   70853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:53.534600   70853 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-726705"
	I0416 18:00:53.534629   70853 host.go:66] Checking if "custom-flannel-726705" exists ...
	I0416 18:00:53.533194   70853 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-726705"
	I0416 18:00:53.534724   70853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-726705"
	I0416 18:00:53.534944   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.534970   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.535102   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.535171   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.549750   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0416 18:00:53.550300   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.550930   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.550954   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.550976   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0416 18:00:53.551285   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.551388   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.551825   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.551844   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.551853   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.551878   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.552241   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.552426   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetState
	I0416 18:00:53.555816   70853 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-726705"
	I0416 18:00:53.555853   70853 host.go:66] Checking if "custom-flannel-726705" exists ...
	I0416 18:00:53.556124   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.556158   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.568226   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0416 18:00:53.568802   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.569317   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.569344   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.569704   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.569951   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetState
	I0416 18:00:53.571640   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:53.573579   70853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 18:00:53.572221   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0416 18:00:53.573987   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.574907   70853 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 18:00:53.574923   70853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 18:00:53.574942   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:53.575508   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.575532   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.575895   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.576350   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.576376   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.578417   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:53.578859   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:53.578875   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:53.579110   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:53.579290   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:53.579520   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:53.579762   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:53.592985   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0416 18:00:53.593435   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.593913   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.593926   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.594311   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.594607   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetState
	I0416 18:00:53.596254   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:53.596499   70853 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 18:00:53.596512   70853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 18:00:53.596527   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:53.599489   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:53.599943   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:53.599957   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:53.600130   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:53.600310   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:53.600451   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:53.600584   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:53.869166   70853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:53.869219   70853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 18:00:53.958197   70853 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-726705" to be "Ready" ...
	I0416 18:00:53.975876   70853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 18:00:54.104310   70853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 18:00:54.558384   70853 start.go:946] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0416 18:00:54.558489   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.558515   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.558822   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.558838   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.558848   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.558858   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.558903   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Closing plugin on server side
	I0416 18:00:54.559119   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.559137   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.573983   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.574007   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.574268   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.574283   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.574304   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Closing plugin on server side
	I0416 18:00:54.896849   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.896879   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.897233   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.897254   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.897267   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.897276   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.897234   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Closing plugin on server side
	I0416 18:00:54.897504   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.897518   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.899708   70853 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0416 18:00:54.901172   70853 addons.go:505] duration metric: took 1.370487309s for enable addons: enabled=[default-storageclass storage-provisioner]
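	The addons phase above copies storage-provisioner.yaml and storageclass.yaml to /etc/kubernetes/addons and applies them with the bundled kubectl. A minimal sketch of inspecting or toggling the same addons from the host with the minikube CLI, assuming the profile name shown in this run:
	    # show which addons are enabled for this profile
	    minikube -p custom-flannel-726705 addons list
	    # enable or disable an addon after the cluster is up
	    minikube -p custom-flannel-726705 addons enable storage-provisioner
	    minikube -p custom-flannel-726705 addons disable storage-provisioner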
	I0416 18:00:55.063442   70853 kapi.go:248] "coredns" deployment in "kube-system" namespace and "custom-flannel-726705" context rescaled to 1 replicas
	I0416 18:00:55.962618   70853 node_ready.go:53] node "custom-flannel-726705" has status "Ready":"False"
	I0416 18:00:58.463146   70853 node_ready.go:53] node "custom-flannel-726705" has status "Ready":"False"
	I0416 18:00:58.962576   70853 node_ready.go:49] node "custom-flannel-726705" has status "Ready":"True"
	I0416 18:00:58.962599   70853 node_ready.go:38] duration metric: took 5.004374341s for node "custom-flannel-726705" to be "Ready" ...
	I0416 18:00:58.962608   70853 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:58.972868   70853 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-vxmxv" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:00.981506   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:03.484724   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:05.981091   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:08.480657   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:10.980261   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:13.481804   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:15.979720   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:16.480539   70853 pod_ready.go:92] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.480566   70853 pod_ready.go:81] duration metric: took 17.507670532s for pod "coredns-76f75df574-vxmxv" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.480578   70853 pod_ready.go:78] waiting up to 15m0s for pod "etcd-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.486146   70853 pod_ready.go:92] pod "etcd-custom-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.486166   70853 pod_ready.go:81] duration metric: took 5.580976ms for pod "etcd-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.486177   70853 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.491969   70853 pod_ready.go:92] pod "kube-apiserver-custom-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.491988   70853 pod_ready.go:81] duration metric: took 5.803844ms for pod "kube-apiserver-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.491997   70853 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.497031   70853 pod_ready.go:92] pod "kube-controller-manager-custom-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.497048   70853 pod_ready.go:81] duration metric: took 5.0462ms for pod "kube-controller-manager-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.497056   70853 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-drjz7" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.501901   70853 pod_ready.go:92] pod "kube-proxy-drjz7" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.501922   70853 pod_ready.go:81] duration metric: took 4.859685ms for pod "kube-proxy-drjz7" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.501931   70853 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.877454   70853 pod_ready.go:92] pod "kube-scheduler-custom-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.877478   70853 pod_ready.go:81] duration metric: took 375.53778ms for pod "kube-scheduler-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.877490   70853 pod_ready.go:38] duration metric: took 17.9148606s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:01:16.877507   70853 api_server.go:52] waiting for apiserver process to appear ...
	I0416 18:01:16.877560   70853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:01:16.896766   70853 api_server.go:72] duration metric: took 23.366138063s to wait for apiserver process to appear ...
	I0416 18:01:16.896789   70853 api_server.go:88] waiting for apiserver healthz status ...
	I0416 18:01:16.896808   70853 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I0416 18:01:16.901964   70853 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I0416 18:01:16.902938   70853 api_server.go:141] control plane version: v1.29.3
	I0416 18:01:16.902960   70853 api_server.go:131] duration metric: took 6.164589ms to wait for apiserver health ...
	I0416 18:01:16.902967   70853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 18:01:17.080912   70853 system_pods.go:59] 7 kube-system pods found
	I0416 18:01:17.080949   70853 system_pods.go:61] "coredns-76f75df574-vxmxv" [e490c26b-3944-42eb-b1df-31f6f943af8d] Running
	I0416 18:01:17.080955   70853 system_pods.go:61] "etcd-custom-flannel-726705" [a1c5ef0f-43a8-4361-ba18-25c9be11932e] Running
	I0416 18:01:17.080958   70853 system_pods.go:61] "kube-apiserver-custom-flannel-726705" [05676c18-da79-44a6-a5bd-0760cb3b9443] Running
	I0416 18:01:17.080961   70853 system_pods.go:61] "kube-controller-manager-custom-flannel-726705" [73db9f1e-a84e-4b86-8d61-6b6635c93bce] Running
	I0416 18:01:17.080964   70853 system_pods.go:61] "kube-proxy-drjz7" [fd2be830-ac99-40b2-9c33-8f58e6bde0af] Running
	I0416 18:01:17.080967   70853 system_pods.go:61] "kube-scheduler-custom-flannel-726705" [931dcb52-df72-4e9b-971f-78dddb76617a] Running
	I0416 18:01:17.080969   70853 system_pods.go:61] "storage-provisioner" [a51f1b5d-557b-4f3c-b76c-909060d453ed] Running
	I0416 18:01:17.080975   70853 system_pods.go:74] duration metric: took 178.002636ms to wait for pod list to return data ...
	I0416 18:01:17.080982   70853 default_sa.go:34] waiting for default service account to be created ...
	I0416 18:01:17.277162   70853 default_sa.go:45] found service account: "default"
	I0416 18:01:17.277186   70853 default_sa.go:55] duration metric: took 196.198426ms for default service account to be created ...
	I0416 18:01:17.277195   70853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 18:01:17.481036   70853 system_pods.go:86] 7 kube-system pods found
	I0416 18:01:17.481062   70853 system_pods.go:89] "coredns-76f75df574-vxmxv" [e490c26b-3944-42eb-b1df-31f6f943af8d] Running
	I0416 18:01:17.481067   70853 system_pods.go:89] "etcd-custom-flannel-726705" [a1c5ef0f-43a8-4361-ba18-25c9be11932e] Running
	I0416 18:01:17.481071   70853 system_pods.go:89] "kube-apiserver-custom-flannel-726705" [05676c18-da79-44a6-a5bd-0760cb3b9443] Running
	I0416 18:01:17.481078   70853 system_pods.go:89] "kube-controller-manager-custom-flannel-726705" [73db9f1e-a84e-4b86-8d61-6b6635c93bce] Running
	I0416 18:01:17.481084   70853 system_pods.go:89] "kube-proxy-drjz7" [fd2be830-ac99-40b2-9c33-8f58e6bde0af] Running
	I0416 18:01:17.481089   70853 system_pods.go:89] "kube-scheduler-custom-flannel-726705" [931dcb52-df72-4e9b-971f-78dddb76617a] Running
	I0416 18:01:17.481095   70853 system_pods.go:89] "storage-provisioner" [a51f1b5d-557b-4f3c-b76c-909060d453ed] Running
	I0416 18:01:17.481104   70853 system_pods.go:126] duration metric: took 203.901975ms to wait for k8s-apps to be running ...
	I0416 18:01:17.481113   70853 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 18:01:17.481174   70853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:01:17.500649   70853 system_svc.go:56] duration metric: took 19.524367ms WaitForService to wait for kubelet
	I0416 18:01:17.500689   70853 kubeadm.go:576] duration metric: took 23.97006282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:01:17.500724   70853 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:01:17.677648   70853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:01:17.677684   70853 node_conditions.go:123] node cpu capacity is 2
	I0416 18:01:17.677698   70853 node_conditions.go:105] duration metric: took 176.967127ms to run NodePressure ...
	I0416 18:01:17.677710   70853 start.go:240] waiting for startup goroutines ...
	I0416 18:01:17.677719   70853 start.go:245] waiting for cluster config update ...
	I0416 18:01:17.677731   70853 start.go:254] writing updated cluster config ...
	I0416 18:01:17.678030   70853 ssh_runner.go:195] Run: rm -f paused
	I0416 18:01:17.732046   70853 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 18:01:17.734931   70853 out.go:177] * Done! kubectl is now configured to use "custom-flannel-726705" cluster and "default" namespace by default
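	Once "Done!" is reported, the kubeconfig context custom-flannel-726705 is the active one. A minimal sketch of repeating the readiness checks the log just performed, assuming kubectl is installed on the host:
	    # confirm the node reached Ready (the log waited ~5s for this)
	    kubectl --context custom-flannel-726705 get nodes -o wide
	    # wait for the system-critical pods the test polls, e.g. CoreDNS
	    kubectl --context custom-flannel-726705 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=300s
	    # spot-check apiserver health the same way api_server.go does above
	    kubectl --context custom-flannel-726705 get --raw /healthz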
	
	
	==> CRI-O <==
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.254134931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290797254110908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2b3bc39-e860-4504-9734-22474387fae0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.254557518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6edc2d27-396f-4990-bd8a-d4db62e77fac name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.254604118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6edc2d27-396f-4990-bd8a-d4db62e77fac name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.254855320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a,PodSandboxId:6319f45ad8d09403fe91dcab6aad0cfae7248a35f2f572ea1205a259fe3113d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290252018855193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v6dwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094,},Annotations:map[string]string{io.kubernetes.container.hash: 50be172b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621,PodSandboxId:02fa8abeeed314035e0e0d0933d96a7abee7e3d3d4c8700534261893ca183856,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290251987311453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2td7t,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 01c407e1-20b0-4554-924e-08e9c1a6e71e,},Annotations:map[string]string{io.kubernetes.container.hash: 329918b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf,PodSandboxId:abe7cfb396097c0144d0464fa40bf76055bc85e07ed63ac8850ab5bff0bb6b4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1713290251280344271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e316ce-7709-4328-b30a-763f622a525c,},Annotations:map[string]string{io.kubernetes.container.hash: b4e05270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65,PodSandboxId:b98107d87af93d76647c36e8e0cfaad7d6327af47661bb823aeefa9186c7bcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713290250739977864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lg46q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c5c13-25ef-4b45-854d-696e53410d7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9dcddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b,PodSandboxId:c401011c9d45fa9a17cec50b7239dcfb0fe3c397a71862312ecfbbe83d24dc4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713290229713561163
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920,PodSandboxId:1aa7c16860ef9dffe23d6533c8b5ceb9a2571ddf507e4db387edbdf0feb5fb8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713290229658935951,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8100a67493dd5fe1046dad3a22563f,},Annotations:map[string]string{io.kubernetes.container.hash: a0f2abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32,PodSandboxId:f28b5381a35a74c20c189c557c02ebc0a7bcbb6835b600a62ee7e81bbe800536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713290229610397882,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43fd45c46c879e08cdd8dafc82bace36,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292,PodSandboxId:ab645c7d517ed231709f18a70af57f9f6a298b50f077d0c55ba7ef74902766f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713290229548866832,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc5f3cccb0bfef719cbb5135a268fbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8,PodSandboxId:ab47a605f6be13b0d6a871341ba0ffa9479776c5ce9a533e9781a74cd5324110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289931959295728,Labels:map[s
tring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6edc2d27-396f-4990-bd8a-d4db62e77fac name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.295818383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e75a6f2-d241-4188-adad-8ae3385447f8 name=/runtime.v1.RuntimeService/Version
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.295891200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e75a6f2-d241-4188-adad-8ae3385447f8 name=/runtime.v1.RuntimeService/Version
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.297508936Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=570aa958-29c7-4b3e-9f6c-c3d27f3d79c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.298072986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290797298045398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=570aa958-29c7-4b3e-9f6c-c3d27f3d79c2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.298626135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d90d833-e9fe-4491-83c7-3824b448f587 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.298770485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d90d833-e9fe-4491-83c7-3824b448f587 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.298973838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a,PodSandboxId:6319f45ad8d09403fe91dcab6aad0cfae7248a35f2f572ea1205a259fe3113d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290252018855193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v6dwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094,},Annotations:map[string]string{io.kubernetes.container.hash: 50be172b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621,PodSandboxId:02fa8abeeed314035e0e0d0933d96a7abee7e3d3d4c8700534261893ca183856,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290251987311453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2td7t,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 01c407e1-20b0-4554-924e-08e9c1a6e71e,},Annotations:map[string]string{io.kubernetes.container.hash: 329918b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf,PodSandboxId:abe7cfb396097c0144d0464fa40bf76055bc85e07ed63ac8850ab5bff0bb6b4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1713290251280344271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e316ce-7709-4328-b30a-763f622a525c,},Annotations:map[string]string{io.kubernetes.container.hash: b4e05270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65,PodSandboxId:b98107d87af93d76647c36e8e0cfaad7d6327af47661bb823aeefa9186c7bcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713290250739977864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lg46q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c5c13-25ef-4b45-854d-696e53410d7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9dcddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b,PodSandboxId:c401011c9d45fa9a17cec50b7239dcfb0fe3c397a71862312ecfbbe83d24dc4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713290229713561163
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920,PodSandboxId:1aa7c16860ef9dffe23d6533c8b5ceb9a2571ddf507e4db387edbdf0feb5fb8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713290229658935951,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8100a67493dd5fe1046dad3a22563f,},Annotations:map[string]string{io.kubernetes.container.hash: a0f2abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32,PodSandboxId:f28b5381a35a74c20c189c557c02ebc0a7bcbb6835b600a62ee7e81bbe800536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713290229610397882,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43fd45c46c879e08cdd8dafc82bace36,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292,PodSandboxId:ab645c7d517ed231709f18a70af57f9f6a298b50f077d0c55ba7ef74902766f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713290229548866832,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc5f3cccb0bfef719cbb5135a268fbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8,PodSandboxId:ab47a605f6be13b0d6a871341ba0ffa9479776c5ce9a533e9781a74cd5324110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289931959295728,Labels:map[s
tring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d90d833-e9fe-4491-83c7-3824b448f587 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.339493702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=828ae2a6-117f-45b5-b0fd-bcbdd8870638 name=/runtime.v1.RuntimeService/Version
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.339834205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=828ae2a6-117f-45b5-b0fd-bcbdd8870638 name=/runtime.v1.RuntimeService/Version
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.341573693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3133982f-093b-49c2-92c1-9b7e66d84035 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.342039945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290797342018468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3133982f-093b-49c2-92c1-9b7e66d84035 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.342589518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a53f872-5f2c-46c9-8323-a3683507d06a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.342723747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a53f872-5f2c-46c9-8323-a3683507d06a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.342926270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a,PodSandboxId:6319f45ad8d09403fe91dcab6aad0cfae7248a35f2f572ea1205a259fe3113d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290252018855193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v6dwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094,},Annotations:map[string]string{io.kubernetes.container.hash: 50be172b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621,PodSandboxId:02fa8abeeed314035e0e0d0933d96a7abee7e3d3d4c8700534261893ca183856,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290251987311453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2td7t,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 01c407e1-20b0-4554-924e-08e9c1a6e71e,},Annotations:map[string]string{io.kubernetes.container.hash: 329918b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf,PodSandboxId:abe7cfb396097c0144d0464fa40bf76055bc85e07ed63ac8850ab5bff0bb6b4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1713290251280344271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e316ce-7709-4328-b30a-763f622a525c,},Annotations:map[string]string{io.kubernetes.container.hash: b4e05270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65,PodSandboxId:b98107d87af93d76647c36e8e0cfaad7d6327af47661bb823aeefa9186c7bcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713290250739977864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lg46q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c5c13-25ef-4b45-854d-696e53410d7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9dcddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b,PodSandboxId:c401011c9d45fa9a17cec50b7239dcfb0fe3c397a71862312ecfbbe83d24dc4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713290229713561163
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920,PodSandboxId:1aa7c16860ef9dffe23d6533c8b5ceb9a2571ddf507e4db387edbdf0feb5fb8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713290229658935951,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8100a67493dd5fe1046dad3a22563f,},Annotations:map[string]string{io.kubernetes.container.hash: a0f2abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32,PodSandboxId:f28b5381a35a74c20c189c557c02ebc0a7bcbb6835b600a62ee7e81bbe800536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713290229610397882,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43fd45c46c879e08cdd8dafc82bace36,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292,PodSandboxId:ab645c7d517ed231709f18a70af57f9f6a298b50f077d0c55ba7ef74902766f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713290229548866832,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc5f3cccb0bfef719cbb5135a268fbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8,PodSandboxId:ab47a605f6be13b0d6a871341ba0ffa9479776c5ce9a533e9781a74cd5324110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289931959295728,Labels:map[s
tring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a53f872-5f2c-46c9-8323-a3683507d06a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.378871724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78ec175b-7191-45d4-89ea-fdd3fb275bba name=/runtime.v1.RuntimeService/Version
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.379052950Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78ec175b-7191-45d4-89ea-fdd3fb275bba name=/runtime.v1.RuntimeService/Version
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.381462013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d515af3-cd94-4c5d-80ba-aeb42c701ae9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.382023314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713290797381994670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d515af3-cd94-4c5d-80ba-aeb42c701ae9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.385634205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=449ccc43-5b08-484b-85ec-d173bc23d915 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.385895854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=449ccc43-5b08-484b-85ec-d173bc23d915 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:06:37 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:06:37.386420772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a,PodSandboxId:6319f45ad8d09403fe91dcab6aad0cfae7248a35f2f572ea1205a259fe3113d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290252018855193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v6dwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094,},Annotations:map[string]string{io.kubernetes.container.hash: 50be172b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621,PodSandboxId:02fa8abeeed314035e0e0d0933d96a7abee7e3d3d4c8700534261893ca183856,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290251987311453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2td7t,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 01c407e1-20b0-4554-924e-08e9c1a6e71e,},Annotations:map[string]string{io.kubernetes.container.hash: 329918b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf,PodSandboxId:abe7cfb396097c0144d0464fa40bf76055bc85e07ed63ac8850ab5bff0bb6b4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1713290251280344271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e316ce-7709-4328-b30a-763f622a525c,},Annotations:map[string]string{io.kubernetes.container.hash: b4e05270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65,PodSandboxId:b98107d87af93d76647c36e8e0cfaad7d6327af47661bb823aeefa9186c7bcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713290250739977864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lg46q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c5c13-25ef-4b45-854d-696e53410d7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9dcddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b,PodSandboxId:c401011c9d45fa9a17cec50b7239dcfb0fe3c397a71862312ecfbbe83d24dc4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713290229713561163
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920,PodSandboxId:1aa7c16860ef9dffe23d6533c8b5ceb9a2571ddf507e4db387edbdf0feb5fb8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713290229658935951,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8100a67493dd5fe1046dad3a22563f,},Annotations:map[string]string{io.kubernetes.container.hash: a0f2abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32,PodSandboxId:f28b5381a35a74c20c189c557c02ebc0a7bcbb6835b600a62ee7e81bbe800536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713290229610397882,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43fd45c46c879e08cdd8dafc82bace36,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292,PodSandboxId:ab645c7d517ed231709f18a70af57f9f6a298b50f077d0c55ba7ef74902766f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713290229548866832,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc5f3cccb0bfef719cbb5135a268fbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8,PodSandboxId:ab47a605f6be13b0d6a871341ba0ffa9479776c5ce9a533e9781a74cd5324110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289931959295728,Labels:map[s
tring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=449ccc43-5b08-484b-85ec-d173bc23d915 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	966b6aa466077       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6319f45ad8d09       coredns-76f75df574-v6dwd
	59869f79e95be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   02fa8abeeed31       coredns-76f75df574-2td7t
	3fcdda7db7fdf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   abe7cfb396097       storage-provisioner
	8bd8bce7ea296       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   b98107d87af93       kube-proxy-lg46q
	6ac416d88f5c5       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   c401011c9d45f       kube-apiserver-default-k8s-diff-port-304316
	b325eb7ba09a6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   1aa7c16860ef9       etcd-default-k8s-diff-port-304316
	fcd49b74b51a0       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   f28b5381a35a7       kube-controller-manager-default-k8s-diff-port-304316
	4e37c3d94e0ab       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   ab645c7d517ed       kube-scheduler-default-k8s-diff-port-304316
	21b906fbaf579       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   14 minutes ago      Exited              kube-apiserver            1                   ab47a605f6be1       kube-apiserver-default-k8s-diff-port-304316
	
	
	==> coredns [59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-304316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-304316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=default-k8s-diff-port-304316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_57_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:57:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-304316
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:06:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:02:42 +0000   Tue, 16 Apr 2024 17:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:02:42 +0000   Tue, 16 Apr 2024 17:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:02:42 +0000   Tue, 16 Apr 2024 17:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:02:42 +0000   Tue, 16 Apr 2024 17:57:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    default-k8s-diff-port-304316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f770da22fc9843ffb7224882cc8739f2
	  System UUID:                f770da22-fc98-43ff-b722-4882cc8739f2
	  Boot ID:                    806d383a-4938-4633-8296-80747352de96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-2td7t                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 coredns-76f75df574-v6dwd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-default-k8s-diff-port-304316                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-304316             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-304316    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-lg46q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-default-k8s-diff-port-304316             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-qv9w5                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node default-k8s-diff-port-304316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node default-k8s-diff-port-304316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node default-k8s-diff-port-304316 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s  kubelet          Node default-k8s-diff-port-304316 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m11s  kubelet          Node default-k8s-diff-port-304316 status is now: NodeReady
	  Normal  RegisteredNode           9m10s  node-controller  Node default-k8s-diff-port-304316 event: Registered Node default-k8s-diff-port-304316 in Controller
	
	
	==> dmesg <==
	[  +0.052065] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.729055] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.610662] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.474801] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 17:52] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.060317] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067815] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.167641] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.156969] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.318911] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +5.128503] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.060264] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.158581] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +5.597438] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.671174] kauditd_printk_skb: 84 callbacks suppressed
	[Apr16 17:57] systemd-fstab-generator[3627]: Ignoring "noauto" option for root device
	[  +0.068183] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.769227] systemd-fstab-generator[3952]: Ignoring "noauto" option for root device
	[  +0.080159] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.494755] systemd-fstab-generator[4166]: Ignoring "noauto" option for root device
	[  +0.094740] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.274379] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920] <==
	{"level":"warn","ts":"2024-04-16T17:57:33.195643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.65237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T17:57:33.195767Z","caller":"traceutil/trace.go:171","msg":"trace[1090401179] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:410; }","duration":"196.846599ms","start":"2024-04-16T17:57:32.998912Z","end":"2024-04-16T17:57:33.195758Z","steps":["trace[1090401179] 'agreement among raft nodes before linearized reading'  (duration: 196.66611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:57:33.654512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.187121ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11349228188295860157 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:385 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/metrics-server\" value_size:781 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-16T17:57:33.654759Z","caller":"traceutil/trace.go:171","msg":"trace[591873344] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:423; }","duration":"437.487744ms","start":"2024-04-16T17:57:33.217253Z","end":"2024-04-16T17:57:33.654741Z","steps":["trace[591873344] 'read index received'  (duration: 245.009317ms)","trace[591873344] 'applied index is now lower than readState.Index'  (duration: 192.476128ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:57:33.654898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"437.633452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-76f75df574-2td7t\" ","response":"range_response_count:1 size:4679"}
	{"level":"info","ts":"2024-04-16T17:57:33.654973Z","caller":"traceutil/trace.go:171","msg":"trace[476989735] range","detail":"{range_begin:/registry/pods/kube-system/coredns-76f75df574-2td7t; range_end:; response_count:1; response_revision:413; }","duration":"437.77151ms","start":"2024-04-16T17:57:33.217189Z","end":"2024-04-16T17:57:33.654961Z","steps":["trace[476989735] 'agreement among raft nodes before linearized reading'  (duration: 437.602969ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:57:33.655009Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:57:33.217175Z","time spent":"437.821951ms","remote":"127.0.0.1:55574","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4701,"request content":"key:\"/registry/pods/kube-system/coredns-76f75df574-2td7t\" "}
	{"level":"info","ts":"2024-04-16T17:57:33.655347Z","caller":"traceutil/trace.go:171","msg":"trace[220973017] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"449.712345ms","start":"2024-04-16T17:57:33.205617Z","end":"2024-04-16T17:57:33.655329Z","steps":["trace[220973017] 'process raft request'  (duration: 256.634796ms)","trace[220973017] 'compare'  (duration: 191.764441ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:57:33.655472Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:57:33.205604Z","time spent":"449.816138ms","remote":"127.0.0.1:55546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":844,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:385 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/metrics-server\" value_size:781 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >"}
	{"level":"info","ts":"2024-04-16T17:57:33.838289Z","caller":"traceutil/trace.go:171","msg":"trace[340670038] linearizableReadLoop","detail":"{readStateIndex:426; appliedIndex:424; }","duration":"177.868364ms","start":"2024-04-16T17:57:33.660407Z","end":"2024-04-16T17:57:33.838275Z","steps":["trace[340670038] 'read index received'  (duration: 116.822838ms)","trace[340670038] 'applied index is now lower than readState.Index'  (duration: 61.044992ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:57:33.838618Z","caller":"traceutil/trace.go:171","msg":"trace[868543303] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"607.677132ms","start":"2024-04-16T17:57:33.230885Z","end":"2024-04-16T17:57:33.838562Z","steps":["trace[868543303] 'process raft request'  (duration: 546.403005ms)","trace[868543303] 'compare'  (duration: 60.855512ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:57:33.838898Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:57:33.230864Z","time spent":"607.964934ms","remote":"127.0.0.1:55574","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4731,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-76f75df574-v6dwd\" mod_revision:344 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-76f75df574-v6dwd\" value_size:4672 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-76f75df574-v6dwd\" > >"}
	{"level":"info","ts":"2024-04-16T17:57:33.839059Z","caller":"traceutil/trace.go:171","msg":"trace[1778091132] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"535.420986ms","start":"2024-04-16T17:57:33.303626Z","end":"2024-04-16T17:57:33.839047Z","steps":["trace[1778091132] 'process raft request'  (duration: 534.604346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:57:33.839159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:57:33.303608Z","time spent":"535.512167ms","remote":"127.0.0.1:55658","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-vjahiivyujr42mxoz5nm4ho5kq\" mod_revision:276 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-vjahiivyujr42mxoz5nm4ho5kq\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-vjahiivyujr42mxoz5nm4ho5kq\" > >"}
	{"level":"warn","ts":"2024-04-16T17:57:33.838913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.487556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-304316\" ","response":"range_response_count:1 size:5764"}
	{"level":"info","ts":"2024-04-16T17:57:33.839429Z","caller":"traceutil/trace.go:171","msg":"trace[1274234003] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-304316; range_end:; response_count:1; response_revision:415; }","duration":"179.040346ms","start":"2024-04-16T17:57:33.660378Z","end":"2024-04-16T17:57:33.839418Z","steps":["trace[1274234003] 'agreement among raft nodes before linearized reading'  (duration: 178.458545ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:58:54.639509Z","caller":"traceutil/trace.go:171","msg":"trace[296041862] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"120.320916ms","start":"2024-04-16T17:58:54.519156Z","end":"2024-04-16T17:58:54.639477Z","steps":["trace[296041862] 'process raft request'  (duration: 60.017247ms)","trace[296041862] 'compare'  (duration: 60.129945ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:58:56.632314Z","caller":"traceutil/trace.go:171","msg":"trace[1425136071] linearizableReadLoop","detail":"{readStateIndex:541; appliedIndex:540; }","duration":"118.624768ms","start":"2024-04-16T17:58:56.513576Z","end":"2024-04-16T17:58:56.632201Z","steps":["trace[1425136071] 'read index received'  (duration: 56.605497ms)","trace[1425136071] 'applied index is now lower than readState.Index'  (duration: 62.017552ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:58:56.633481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.648991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-04-16T17:58:56.633814Z","caller":"traceutil/trace.go:171","msg":"trace[830659573] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:512; }","duration":"120.282613ms","start":"2024-04-16T17:58:56.513515Z","end":"2024-04-16T17:58:56.633798Z","steps":["trace[830659573] 'agreement among raft nodes before linearized reading'  (duration: 119.280568ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:58:56.634803Z","caller":"traceutil/trace.go:171","msg":"trace[1294242776] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"201.718679ms","start":"2024-04-16T17:58:56.43288Z","end":"2024-04-16T17:58:56.634598Z","steps":["trace[1294242776] 'process raft request'  (duration: 137.351715ms)","trace[1294242776] 'compare'  (duration: 60.926204ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T18:00:05.372446Z","caller":"traceutil/trace.go:171","msg":"trace[447696603] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"120.36715ms","start":"2024-04-16T18:00:05.252029Z","end":"2024-04-16T18:00:05.372396Z","steps":["trace[447696603] 'process raft request'  (duration: 120.235539ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T18:00:27.658075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.48509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-04-16T18:00:27.658356Z","caller":"traceutil/trace.go:171","msg":"trace[1345281850] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:590; }","duration":"130.999019ms","start":"2024-04-16T18:00:27.527329Z","end":"2024-04-16T18:00:27.658328Z","steps":["trace[1345281850] 'range keys from in-memory index tree'  (duration: 130.343262ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T18:00:27.877392Z","caller":"traceutil/trace.go:171","msg":"trace[576843287] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"213.325397ms","start":"2024-04-16T18:00:27.664045Z","end":"2024-04-16T18:00:27.87737Z","steps":["trace[576843287] 'process raft request'  (duration: 213.167781ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:06:37 up 14 min,  0 users,  load average: 0.11, 0.16, 0.13
	Linux default-k8s-diff-port-304316 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8] <==
	W0416 17:56:58.666306       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.708012       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.709471       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.816057       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.883149       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.922122       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.955291       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.983025       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.048918       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.076194       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.111405       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.118593       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.275775       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.315572       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.358615       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.381956       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.601800       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.610153       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.615432       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.939986       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.970863       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:57:00.036246       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:57:00.695360       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:57:05.091184       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:57:05.314121       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b] <==
	I0416 18:00:31.644861       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:02:12.736892       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:02:12.736998       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 18:02:13.737957       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:02:13.738015       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 18:02:13.738025       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:02:13.738077       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:02:13.738125       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 18:02:13.739293       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:03:13.739201       1 handler_proxy.go:93] no RequestInfo found in the context
	W0416 18:03:13.739474       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:03:13.739535       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 18:03:13.739560       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0416 18:03:13.739601       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 18:03:13.740757       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:05:13.739966       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:05:13.740278       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 18:05:13.740310       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:05:13.741185       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:05:13.741240       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 18:05:13.742445       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32] <==
	I0416 18:00:58.538957       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:01:28.037613       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:01:28.548281       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:01:58.043273       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:01:58.558867       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:02:28.049907       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:02:28.568953       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:02:58.060475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:02:58.578950       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 18:03:26.416181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="313.82µs"
	E0416 18:03:28.065816       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:03:28.591143       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 18:03:41.410170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="184.657µs"
	E0416 18:03:58.070904       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:03:58.600788       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:04:28.076545       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:04:28.609100       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:04:58.081206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:04:58.617926       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:05:28.087116       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:05:28.632118       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:05:58.093167       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:05:58.640124       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:06:28.098536       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:06:28.650209       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65] <==
	I0416 17:57:31.177180       1 server_others.go:72] "Using iptables proxy"
	I0416 17:57:31.194356       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	I0416 17:57:31.298863       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:57:31.298890       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:57:31.298912       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:57:31.304041       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:57:31.304852       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:57:31.304895       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:57:31.308851       1 config.go:188] "Starting service config controller"
	I0416 17:57:31.308906       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:57:31.308940       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:57:31.308971       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:57:31.324277       1 config.go:315] "Starting node config controller"
	I0416 17:57:31.324293       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:57:31.415836       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:57:31.415901       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:57:31.424868       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292] <==
	W0416 17:57:13.658783       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:13.658856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:13.739486       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:57:13.739549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:57:13.807163       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:57:13.807187       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:57:13.816081       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 17:57:13.816215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 17:57:13.835048       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 17:57:13.835126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 17:57:13.881159       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:13.882723       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:13.942040       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:13.942280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:13.983933       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:13.984269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:13.984223       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:57:13.984371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:57:14.033834       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:57:14.033888       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:57:14.129069       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:57:14.129339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:57:14.136176       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:57:14.136275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0416 17:57:16.004796       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 18:04:16 default-k8s-diff-port-304316 kubelet[3959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:04:16 default-k8s-diff-port-304316 kubelet[3959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:04:16 default-k8s-diff-port-304316 kubelet[3959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:04:16 default-k8s-diff-port-304316 kubelet[3959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:04:25 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:04:25.395501    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:04:36 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:04:36.396507    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:04:48 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:04:48.394709    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:05:02 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:05:02.395390    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:05:15 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:05:15.395513    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:05:16 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:05:16.462780    3959 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:05:16 default-k8s-diff-port-304316 kubelet[3959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:05:16 default-k8s-diff-port-304316 kubelet[3959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:05:16 default-k8s-diff-port-304316 kubelet[3959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:05:16 default-k8s-diff-port-304316 kubelet[3959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:05:28 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:05:28.394607    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:05:41 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:05:41.395128    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:05:55 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:05:55.396118    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:06:06 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:06:06.398123    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:06:16 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:06:16.465792    3959 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:06:16 default-k8s-diff-port-304316 kubelet[3959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:06:16 default-k8s-diff-port-304316 kubelet[3959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:06:16 default-k8s-diff-port-304316 kubelet[3959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:06:16 default-k8s-diff-port-304316 kubelet[3959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:06:19 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:06:19.395159    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:06:34 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:06:34.394853    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	
	
	==> storage-provisioner [3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf] <==
	I0416 17:57:31.608200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 17:57:31.633204       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 17:57:31.633483       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 17:57:31.683892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 17:57:31.684058       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-304316_e74b58d7-e061-46b4-bbc0-a983d5d046af!
	I0416 17:57:31.699046       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9622f49e-56fe-44d4-a543-bcc5bd14e470", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-304316_e74b58d7-e061-46b4-bbc0-a983d5d046af became leader
	I0416 17:57:31.804955       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-304316_e74b58d7-e061-46b4-bbc0-a983d5d046af!
	

-- /stdout --
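The component logs above share one recurring signature: the aggregated metrics API (v1beta1.metrics.k8s.io) never becomes available because the metrics-server pod sits in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, which is what kube-apiserver's repeated 503 "failed to download v1beta1.metrics.k8s.io" errors and kube-controller-manager's "stale GroupVersion discovery" errors reflect. A quick manual cross-check of that APIService, assuming the same kubeconfig context the harness uses, would be something like:

	kubectl --context default-k8s-diff-port-304316 get apiservice v1beta1.metrics.k8s.io   # assumes the harness's kube context still exists

An unavailable aggregated API typically shows Available=False in this output, consistent with the 503s above.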
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-304316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qv9w5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-304316 describe pod metrics-server-57f55c9bc5-qv9w5
E0416 18:06:38.638702   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304316 describe pod metrics-server-57f55c9bc5-qv9w5: exit status 1 (58.674538ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qv9w5" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-304316 describe pod metrics-server-57f55c9bc5-qv9w5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.40s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (393.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0416 18:06:59.119863   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:07:03.890448   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 18:07:10.029999   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 18:07:38.367220   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:07:39.161030   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:07:40.080953   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:07:44.930142   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:08:03.936550   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:08:12.612751   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:08:23.828816   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 18:08:31.617952   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:08:47.733706   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 18:08:50.716978   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:09:02.001996   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:09:18.400319   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:09:46.874636   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 18:09:54.523958   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:09:55.318287   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:10:22.208247   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:10:23.002230   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:10:56.728645   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/auto-726705/client.crt: no such file or directory
E0416 18:11:18.155816   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:11:45.842316   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:11:46.939558   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 18:12:03.889552   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 18:12:10.029935   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 18:12:44.930011   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:13:03.936487   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-16 18:13:10.343403302 +0000 UTC m=+6825.742079836
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-304316 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304316 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.388µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-304316 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
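Both failed checks above can be approximated by hand against the same profile. These are only sketches, not commands the harness itself ran: they assume the harness's kube context (default-k8s-diff-port-304316) and the dashboard-metrics-scraper deployment named in the describe step:

	kubectl --context default-k8s-diff-port-304316 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m   # rough equivalent of the 9m0s readiness wait that timed out
	kubectl --context default-k8s-diff-port-304316 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'   # prints the image the harness expected to contain registry.k8s.io/echoserver:1.4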
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-304316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-304316 logs -n 25: (1.358802798s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | sudo cat                                             |                       |         |                |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | systemctl status kubelet --all                       |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo cat                                             |                       |         |                |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo cat                                             |                       |         |                |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo systemctl cat docker                            |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | docker system info                                   |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |                |                     |                     |
	|         | --all --full --no-pager                              |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo cat                    | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo cat                    | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | cri-dockerd --version                                |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC |                     |
	|         | systemctl status containerd                          |                       |         |                |                     |                     |
	|         | --all --full --no-pager                              |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |                |                     |                     |
	|         | --no-pager                                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo cat                    | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | sudo cat                                             |                       |         |                |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | containerd config dump                               |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | systemctl status crio --all                          |                       |         |                |                     |                     |
	|         | --full --no-pager                                    |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |                |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |                |                     |                     |
	| ssh     | -p custom-flannel-726705 sudo                        | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|         | crio config                                          |                       |         |                |                     |                     |
	| delete  | -p custom-flannel-726705                             | custom-flannel-726705 | jenkins | v1.33.0-beta.0 | 16 Apr 24 18:01 UTC | 16 Apr 24 18:01 UTC |
	|---------|------------------------------------------------------|-----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 17:59:46
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 17:59:46.737039   70853 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:59:46.737277   70853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:59:46.737288   70853 out.go:304] Setting ErrFile to fd 2...
	I0416 17:59:46.737292   70853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:59:46.737445   70853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:59:46.738058   70853 out.go:298] Setting JSON to false
	I0416 17:59:46.739271   70853 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6139,"bootTime":1713284248,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:59:46.739333   70853 start.go:139] virtualization: kvm guest
	I0416 17:59:46.741592   70853 out.go:177] * [custom-flannel-726705] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:59:46.743739   70853 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:59:46.743738   70853 notify.go:220] Checking for updates...
	I0416 17:59:46.745257   70853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:59:46.746786   70853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:59:46.748414   70853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:59:46.749785   70853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:59:46.751168   70853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:59:41.794997   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:42.295463   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:42.795335   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:43.295116   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:43.794569   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:44.295426   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:44.794957   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:45.294982   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:45.795569   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:46.295540   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:46.752805   70853 config.go:182] Loaded profile config "calico-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:59:46.752943   70853 config.go:182] Loaded profile config "default-k8s-diff-port-304316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:59:46.753084   70853 config.go:182] Loaded profile config "kindnet-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:59:46.753210   70853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:59:46.795439   70853 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 17:59:46.796890   70853 start.go:297] selected driver: kvm2
	I0416 17:59:46.796910   70853 start.go:901] validating driver "kvm2" against <nil>
	I0416 17:59:46.796924   70853 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:59:46.797806   70853 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:59:46.797940   70853 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 17:59:46.813722   70853 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 17:59:46.813801   70853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 17:59:46.814093   70853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:59:46.814179   70853 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0416 17:59:46.814202   70853 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0416 17:59:46.814276   70853 start.go:340] cluster config:
	{Name:custom-flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 17:59:46.814482   70853 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 17:59:46.816050   70853 out.go:177] * Starting "custom-flannel-726705" primary control-plane node in "custom-flannel-726705" cluster
	I0416 17:59:46.817460   70853 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 17:59:46.817509   70853 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 17:59:46.817522   70853 cache.go:56] Caching tarball of preloaded images
	I0416 17:59:46.817628   70853 preload.go:173] Found /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 17:59:46.817642   70853 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 17:59:46.817770   70853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/config.json ...
	I0416 17:59:46.817797   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/config.json: {Name:mkbfcac95f14b1a42efb03c410f579e5b433a3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:59:46.817980   70853 start.go:360] acquireMachinesLock for custom-flannel-726705: {Name:mk8a94aad43bb997463d1a4d11a9eec22146ca4f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 17:59:46.818032   70853 start.go:364] duration metric: took 27.232µs to acquireMachinesLock for "custom-flannel-726705"
	I0416 17:59:46.818093   70853 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:59:46.818189   70853 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 17:59:46.794588   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:47.294648   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:47.794646   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:48.294580   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:48.795088   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:49.294580   68924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 17:59:49.444925   68924 kubeadm.go:1107] duration metric: took 11.352689424s to wait for elevateKubeSystemPrivileges
	W0416 17:59:49.444957   68924 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 17:59:49.444967   68924 kubeadm.go:393] duration metric: took 24.231281676s to StartCluster
	I0416 17:59:49.445018   68924 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:59:49.445092   68924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:59:49.446709   68924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 17:59:49.446908   68924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 17:59:49.446918   68924 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.229 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 17:59:49.448948   68924 out.go:177] * Verifying Kubernetes components...
	I0416 17:59:49.447005   68924 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 17:59:49.447104   68924 config.go:182] Loaded profile config "kindnet-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:59:49.450307   68924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 17:59:49.448995   68924 addons.go:69] Setting storage-provisioner=true in profile "kindnet-726705"
	I0416 17:59:49.450365   68924 addons.go:234] Setting addon storage-provisioner=true in "kindnet-726705"
	I0416 17:59:49.450400   68924 host.go:66] Checking if "kindnet-726705" exists ...
	I0416 17:59:49.449011   68924 addons.go:69] Setting default-storageclass=true in profile "kindnet-726705"
	I0416 17:59:49.450440   68924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-726705"
	I0416 17:59:49.450857   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.450891   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.450894   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.450905   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.468111   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0416 17:59:49.468734   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.469344   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.469375   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.469768   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.470351   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.470394   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.471942   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0416 17:59:49.472298   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.472773   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.472794   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.473173   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.473410   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetState
	I0416 17:59:49.477012   68924 addons.go:234] Setting addon default-storageclass=true in "kindnet-726705"
	I0416 17:59:49.477051   68924 host.go:66] Checking if "kindnet-726705" exists ...
	I0416 17:59:49.477360   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.477399   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.497819   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0416 17:59:49.498275   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.499872   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I0416 17:59:49.500355   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.500562   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.500575   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.500887   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.500901   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.501078   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.501217   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetState
	I0416 17:59:49.502197   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.502870   68924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:49.502909   68924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:49.503323   68924 main.go:141] libmachine: (kindnet-726705) Calling .DriverName
	I0416 17:59:49.508052   68924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 17:59:49.509525   68924 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:59:49.509538   68924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 17:59:49.509552   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHHostname
	I0416 17:59:49.512823   68924 main.go:141] libmachine: (kindnet-726705) DBG | domain kindnet-726705 has defined MAC address 52:54:00:13:aa:d0 in network mk-kindnet-726705
	I0416 17:59:49.513313   68924 main.go:141] libmachine: (kindnet-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:aa:d0", ip: ""} in network mk-kindnet-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:59:06 +0000 UTC Type:0 Mac:52:54:00:13:aa:d0 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:kindnet-726705 Clientid:01:52:54:00:13:aa:d0}
	I0416 17:59:49.513329   68924 main.go:141] libmachine: (kindnet-726705) DBG | domain kindnet-726705 has defined IP address 192.168.61.229 and MAC address 52:54:00:13:aa:d0 in network mk-kindnet-726705
	I0416 17:59:49.513477   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHPort
	I0416 17:59:49.513617   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHKeyPath
	I0416 17:59:49.513768   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHUsername
	I0416 17:59:49.513897   68924 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kindnet-726705/id_rsa Username:docker}
	I0416 17:59:49.524922   68924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I0416 17:59:49.526939   68924 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:49.527464   68924 main.go:141] libmachine: Using API Version  1
	I0416 17:59:49.527507   68924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:49.527907   68924 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:49.528075   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetState
	I0416 17:59:49.529952   68924 main.go:141] libmachine: (kindnet-726705) Calling .DriverName
	I0416 17:59:49.530222   68924 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 17:59:49.530235   68924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 17:59:49.530247   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHHostname
	I0416 17:59:49.533336   68924 main.go:141] libmachine: (kindnet-726705) DBG | domain kindnet-726705 has defined MAC address 52:54:00:13:aa:d0 in network mk-kindnet-726705
	I0416 17:59:49.533827   68924 main.go:141] libmachine: (kindnet-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:aa:d0", ip: ""} in network mk-kindnet-726705: {Iface:virbr4 ExpiryTime:2024-04-16 18:59:06 +0000 UTC Type:0 Mac:52:54:00:13:aa:d0 Iaid: IPaddr:192.168.61.229 Prefix:24 Hostname:kindnet-726705 Clientid:01:52:54:00:13:aa:d0}
	I0416 17:59:49.533855   68924 main.go:141] libmachine: (kindnet-726705) DBG | domain kindnet-726705 has defined IP address 192.168.61.229 and MAC address 52:54:00:13:aa:d0 in network mk-kindnet-726705
	I0416 17:59:49.534038   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHPort
	I0416 17:59:49.539432   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHKeyPath
	I0416 17:59:49.539652   68924 main.go:141] libmachine: (kindnet-726705) Calling .GetSSHUsername
	I0416 17:59:49.539788   68924 sshutil.go:53] new ssh client: &{IP:192.168.61.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/kindnet-726705/id_rsa Username:docker}
	I0416 17:59:49.693551   68924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 17:59:49.745621   68924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 17:59:49.890557   68924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 17:59:49.946357   68924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 17:59:50.441284   68924 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0416 17:59:50.442672   68924 node_ready.go:35] waiting up to 15m0s for node "kindnet-726705" to be "Ready" ...
	I0416 17:59:50.869723   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.869748   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.869835   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.869859   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.870259   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.870277   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.870286   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.870292   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.870321   68924 main.go:141] libmachine: (kindnet-726705) DBG | Closing plugin on server side
	I0416 17:59:50.870374   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.870400   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.870417   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.870424   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.870495   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.870509   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.871922   68924 main.go:141] libmachine: (kindnet-726705) DBG | Closing plugin on server side
	I0416 17:59:50.872341   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.872358   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.885720   68924 main.go:141] libmachine: Making call to close driver server
	I0416 17:59:50.885742   68924 main.go:141] libmachine: (kindnet-726705) Calling .Close
	I0416 17:59:50.886044   68924 main.go:141] libmachine: (kindnet-726705) DBG | Closing plugin on server side
	I0416 17:59:50.886100   68924 main.go:141] libmachine: Successfully made call to close driver server
	I0416 17:59:50.886116   68924 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 17:59:50.887729   68924 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
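The addon step above drops each manifest under /etc/kubernetes/addons/ and applies it with the cluster's pinned kubectl binary and an explicit kubeconfig. A minimal sketch of that pattern in Go, shelling out to kubectl (the helper name and paths are hypothetical, taken from the log for illustration; this is not minikube's actual addon code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest runs `kubectl apply -f <manifest>` against the given kubeconfig.
// Hedged sketch of the pattern visible in the log above, not minikube's code.
func applyManifest(kubectlPath, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectlPath, "apply", "-f", manifest)
	// Point kubectl at the cluster the same way the logged command does: via KUBECONFIG.
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	// Illustrative paths mirroring the ones in the log.
	err := applyManifest("/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}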
	I0416 17:59:47.846690   67680 pod_ready.go:102] pod "calico-kube-controllers-787f445f84-b4whw" in "kube-system" namespace has status "Ready":"False"
	I0416 17:59:49.347410   67680 pod_ready.go:92] pod "calico-kube-controllers-787f445f84-b4whw" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:49.347433   67680 pod_ready.go:81] duration metric: took 17.008193947s for pod "calico-kube-controllers-787f445f84-b4whw" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:49.347443   67680 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-bkzqr" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:51.356403   67680 pod_ready.go:102] pod "calico-node-bkzqr" in "kube-system" namespace has status "Ready":"False"
	I0416 17:59:46.819995   70853 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0416 17:59:46.820151   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:59:46.820183   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:59:46.835144   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0416 17:59:46.835599   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:59:46.836197   70853 main.go:141] libmachine: Using API Version  1
	I0416 17:59:46.836221   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:59:46.836591   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:59:46.836887   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetMachineName
	I0416 17:59:46.837116   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 17:59:46.837290   70853 start.go:159] libmachine.API.Create for "custom-flannel-726705" (driver="kvm2")
	I0416 17:59:46.837336   70853 client.go:168] LocalClient.Create starting
	I0416 17:59:46.837378   70853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem
	I0416 17:59:46.837407   70853 main.go:141] libmachine: Decoding PEM data...
	I0416 17:59:46.837427   70853 main.go:141] libmachine: Parsing certificate...
	I0416 17:59:46.837495   70853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem
	I0416 17:59:46.837523   70853 main.go:141] libmachine: Decoding PEM data...
	I0416 17:59:46.837540   70853 main.go:141] libmachine: Parsing certificate...
	I0416 17:59:46.837564   70853 main.go:141] libmachine: Running pre-create checks...
	I0416 17:59:46.837576   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .PreCreateCheck
	I0416 17:59:46.837968   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetConfigRaw
	I0416 17:59:46.838423   70853 main.go:141] libmachine: Creating machine...
	I0416 17:59:46.838443   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Create
	I0416 17:59:46.838619   70853 main.go:141] libmachine: (custom-flannel-726705) Creating KVM machine...
	I0416 17:59:46.840144   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found existing default KVM network
	I0416 17:59:46.841610   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.841449   70875 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:c7:a6} reservation:<nil>}
	I0416 17:59:46.842808   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.842688   70875 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1c:69:c5} reservation:<nil>}
	I0416 17:59:46.844258   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.844154   70875 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:7d:cc} reservation:<nil>}
	I0416 17:59:46.845657   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.845576   70875 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002bd950}
	I0416 17:59:46.845695   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | created network xml: 
	I0416 17:59:46.845721   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | <network>
	I0416 17:59:46.845736   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   <name>mk-custom-flannel-726705</name>
	I0416 17:59:46.845746   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   <dns enable='no'/>
	I0416 17:59:46.845756   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   
	I0416 17:59:46.845775   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0416 17:59:46.845804   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |     <dhcp>
	I0416 17:59:46.845833   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0416 17:59:46.845847   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |     </dhcp>
	I0416 17:59:46.845858   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   </ip>
	I0416 17:59:46.845868   70853 main.go:141] libmachine: (custom-flannel-726705) DBG |   
	I0416 17:59:46.845878   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | </network>
	I0416 17:59:46.845889   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | 
	I0416 17:59:46.851387   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | trying to create private KVM network mk-custom-flannel-726705 192.168.72.0/24...
	I0416 17:59:46.934476   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | private KVM network mk-custom-flannel-726705 192.168.72.0/24 created
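For context on what "private KVM network ... created" amounts to: the driver hands the generated network XML to libvirt and brings the network up. A rough, hedged equivalent using the libvirt Go bindings (the import path, error handling, and the decision to autostart nothing are assumptions for brevity; this is not the kvm2 driver's actual code):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

// networkXML mirrors the XML printed in the log above.
const networkXML = `<network>
  <name>mk-custom-flannel-726705</name>
  <dns enable='no'/>
  <ip address='192.168.72.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.72.2' end='192.168.72.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// Define the persistent network from the XML, then start it.
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer network.Free()

	if err := network.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	log.Println("private network mk-custom-flannel-726705 is up")
}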
	I0416 17:59:46.934633   70853 main.go:141] libmachine: (custom-flannel-726705) Setting up store path in /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705 ...
	I0416 17:59:46.934719   70853 main.go:141] libmachine: (custom-flannel-726705) Building disk image from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 17:59:46.944957   70853 main.go:141] libmachine: (custom-flannel-726705) Downloading /home/jenkins/minikube-integration/18649-3628/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso...
	I0416 17:59:46.945028   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:46.934885   70875 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:59:47.192405   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:47.192219   70875 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa...
	I0416 17:59:47.463349   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:47.463160   70875 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/custom-flannel-726705.rawdisk...
	I0416 17:59:47.463392   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Writing magic tar header
	I0416 17:59:47.463446   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Writing SSH key tar header
	I0416 17:59:47.463466   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:47.463300   70875 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705 ...
	I0416 17:59:47.463481   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705 (perms=drwx------)
	I0416 17:59:47.463506   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705
	I0416 17:59:47.463529   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube/machines
	I0416 17:59:47.463564   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:59:47.463591   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube/machines (perms=drwxr-xr-x)
	I0416 17:59:47.463615   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18649-3628
	I0416 17:59:47.463669   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 17:59:47.463693   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628/.minikube (perms=drwxr-xr-x)
	I0416 17:59:47.463711   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration/18649-3628 (perms=drwxrwxr-x)
	I0416 17:59:47.463746   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home/jenkins
	I0416 17:59:47.463777   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Checking permissions on dir: /home
	I0416 17:59:47.463795   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Skipping /home - not owner
	I0416 17:59:47.463813   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 17:59:47.463860   70853 main.go:141] libmachine: (custom-flannel-726705) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 17:59:47.463885   70853 main.go:141] libmachine: (custom-flannel-726705) Creating domain...
	I0416 17:59:47.464917   70853 main.go:141] libmachine: (custom-flannel-726705) define libvirt domain using xml: 
	I0416 17:59:47.464942   70853 main.go:141] libmachine: (custom-flannel-726705) <domain type='kvm'>
	I0416 17:59:47.464953   70853 main.go:141] libmachine: (custom-flannel-726705)   <name>custom-flannel-726705</name>
	I0416 17:59:47.464965   70853 main.go:141] libmachine: (custom-flannel-726705)   <memory unit='MiB'>3072</memory>
	I0416 17:59:47.464975   70853 main.go:141] libmachine: (custom-flannel-726705)   <vcpu>2</vcpu>
	I0416 17:59:47.467214   70853 main.go:141] libmachine: (custom-flannel-726705)   <features>
	I0416 17:59:47.467239   70853 main.go:141] libmachine: (custom-flannel-726705)     <acpi/>
	I0416 17:59:47.467247   70853 main.go:141] libmachine: (custom-flannel-726705)     <apic/>
	I0416 17:59:47.467255   70853 main.go:141] libmachine: (custom-flannel-726705)     <pae/>
	I0416 17:59:47.467263   70853 main.go:141] libmachine: (custom-flannel-726705)     
	I0416 17:59:47.467295   70853 main.go:141] libmachine: (custom-flannel-726705)   </features>
	I0416 17:59:47.467315   70853 main.go:141] libmachine: (custom-flannel-726705)   <cpu mode='host-passthrough'>
	I0416 17:59:47.467326   70853 main.go:141] libmachine: (custom-flannel-726705)   
	I0416 17:59:47.467335   70853 main.go:141] libmachine: (custom-flannel-726705)   </cpu>
	I0416 17:59:47.467349   70853 main.go:141] libmachine: (custom-flannel-726705)   <os>
	I0416 17:59:47.467360   70853 main.go:141] libmachine: (custom-flannel-726705)     <type>hvm</type>
	I0416 17:59:47.467368   70853 main.go:141] libmachine: (custom-flannel-726705)     <boot dev='cdrom'/>
	I0416 17:59:47.467378   70853 main.go:141] libmachine: (custom-flannel-726705)     <boot dev='hd'/>
	I0416 17:59:47.467385   70853 main.go:141] libmachine: (custom-flannel-726705)     <bootmenu enable='no'/>
	I0416 17:59:47.467395   70853 main.go:141] libmachine: (custom-flannel-726705)   </os>
	I0416 17:59:47.467403   70853 main.go:141] libmachine: (custom-flannel-726705)   <devices>
	I0416 17:59:47.467419   70853 main.go:141] libmachine: (custom-flannel-726705)     <disk type='file' device='cdrom'>
	I0416 17:59:47.467460   70853 main.go:141] libmachine: (custom-flannel-726705)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/boot2docker.iso'/>
	I0416 17:59:47.467475   70853 main.go:141] libmachine: (custom-flannel-726705)       <target dev='hdc' bus='scsi'/>
	I0416 17:59:47.467485   70853 main.go:141] libmachine: (custom-flannel-726705)       <readonly/>
	I0416 17:59:47.467493   70853 main.go:141] libmachine: (custom-flannel-726705)     </disk>
	I0416 17:59:47.467503   70853 main.go:141] libmachine: (custom-flannel-726705)     <disk type='file' device='disk'>
	I0416 17:59:47.467513   70853 main.go:141] libmachine: (custom-flannel-726705)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 17:59:47.467528   70853 main.go:141] libmachine: (custom-flannel-726705)       <source file='/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/custom-flannel-726705.rawdisk'/>
	I0416 17:59:47.467537   70853 main.go:141] libmachine: (custom-flannel-726705)       <target dev='hda' bus='virtio'/>
	I0416 17:59:47.467546   70853 main.go:141] libmachine: (custom-flannel-726705)     </disk>
	I0416 17:59:47.467553   70853 main.go:141] libmachine: (custom-flannel-726705)     <interface type='network'>
	I0416 17:59:47.467564   70853 main.go:141] libmachine: (custom-flannel-726705)       <source network='mk-custom-flannel-726705'/>
	I0416 17:59:47.467572   70853 main.go:141] libmachine: (custom-flannel-726705)       <model type='virtio'/>
	I0416 17:59:47.467581   70853 main.go:141] libmachine: (custom-flannel-726705)     </interface>
	I0416 17:59:47.467589   70853 main.go:141] libmachine: (custom-flannel-726705)     <interface type='network'>
	I0416 17:59:47.467599   70853 main.go:141] libmachine: (custom-flannel-726705)       <source network='default'/>
	I0416 17:59:47.467607   70853 main.go:141] libmachine: (custom-flannel-726705)       <model type='virtio'/>
	I0416 17:59:47.467617   70853 main.go:141] libmachine: (custom-flannel-726705)     </interface>
	I0416 17:59:47.467625   70853 main.go:141] libmachine: (custom-flannel-726705)     <serial type='pty'>
	I0416 17:59:47.467635   70853 main.go:141] libmachine: (custom-flannel-726705)       <target port='0'/>
	I0416 17:59:47.467642   70853 main.go:141] libmachine: (custom-flannel-726705)     </serial>
	I0416 17:59:47.467651   70853 main.go:141] libmachine: (custom-flannel-726705)     <console type='pty'>
	I0416 17:59:47.467658   70853 main.go:141] libmachine: (custom-flannel-726705)       <target type='serial' port='0'/>
	I0416 17:59:47.467666   70853 main.go:141] libmachine: (custom-flannel-726705)     </console>
	I0416 17:59:47.467673   70853 main.go:141] libmachine: (custom-flannel-726705)     <rng model='virtio'>
	I0416 17:59:47.467682   70853 main.go:141] libmachine: (custom-flannel-726705)       <backend model='random'>/dev/random</backend>
	I0416 17:59:47.467688   70853 main.go:141] libmachine: (custom-flannel-726705)     </rng>
	I0416 17:59:47.467696   70853 main.go:141] libmachine: (custom-flannel-726705)     
	I0416 17:59:47.467702   70853 main.go:141] libmachine: (custom-flannel-726705)     
	I0416 17:59:47.467710   70853 main.go:141] libmachine: (custom-flannel-726705)   </devices>
	I0416 17:59:47.467716   70853 main.go:141] libmachine: (custom-flannel-726705) </domain>
	I0416 17:59:47.467726   70853 main.go:141] libmachine: (custom-flannel-726705) 
	I0416 17:59:47.469792   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:98:aa:a1 in network default
	I0416 17:59:47.470411   70853 main.go:141] libmachine: (custom-flannel-726705) Ensuring networks are active...
	I0416 17:59:47.470441   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:47.471079   70853 main.go:141] libmachine: (custom-flannel-726705) Ensuring network default is active
	I0416 17:59:47.471444   70853 main.go:141] libmachine: (custom-flannel-726705) Ensuring network mk-custom-flannel-726705 is active
	I0416 17:59:47.472028   70853 main.go:141] libmachine: (custom-flannel-726705) Getting domain xml...
	I0416 17:59:47.472757   70853 main.go:141] libmachine: (custom-flannel-726705) Creating domain...
	I0416 17:59:48.852467   70853 main.go:141] libmachine: (custom-flannel-726705) Waiting to get IP...
	I0416 17:59:48.853431   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:48.853979   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:48.854006   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:48.853945   70875 retry.go:31] will retry after 254.465483ms: waiting for machine to come up
	I0416 17:59:49.110346   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:49.110986   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:49.111005   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:49.110949   70875 retry.go:31] will retry after 371.607637ms: waiting for machine to come up
	I0416 17:59:49.484459   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:49.484996   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:49.485025   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:49.484944   70875 retry.go:31] will retry after 334.420894ms: waiting for machine to come up
	I0416 17:59:49.821584   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:49.822220   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:49.822247   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:49.822160   70875 retry.go:31] will retry after 480.825723ms: waiting for machine to come up
	I0416 17:59:50.305051   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:50.305564   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:50.305587   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:50.305509   70875 retry.go:31] will retry after 741.101971ms: waiting for machine to come up
	I0416 17:59:51.048684   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:51.049279   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:51.049330   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:51.049239   70875 retry.go:31] will retry after 704.311837ms: waiting for machine to come up
	I0416 17:59:50.889121   68924 addons.go:505] duration metric: took 1.442125693s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 17:59:50.947741   68924 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-726705" context rescaled to 1 replicas
	I0416 17:59:52.870161   67680 pod_ready.go:92] pod "calico-node-bkzqr" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.870190   67680 pod_ready.go:81] duration metric: took 3.522739385s for pod "calico-node-bkzqr" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.870203   67680 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-6nc69" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.887485   67680 pod_ready.go:92] pod "coredns-76f75df574-6nc69" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.887524   67680 pod_ready.go:81] duration metric: took 17.312814ms for pod "coredns-76f75df574-6nc69" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.887540   67680 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.895226   67680 pod_ready.go:92] pod "etcd-calico-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.895256   67680 pod_ready.go:81] duration metric: took 7.706894ms for pod "etcd-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.895268   67680 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.908224   67680 pod_ready.go:92] pod "kube-apiserver-calico-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.908248   67680 pod_ready.go:81] duration metric: took 12.971818ms for pod "kube-apiserver-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.908257   67680 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.915225   67680 pod_ready.go:92] pod "kube-controller-manager-calico-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:52.915255   67680 pod_ready.go:81] duration metric: took 6.989899ms for pod "kube-controller-manager-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:52.915269   67680 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-sjbpp" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.252474   67680 pod_ready.go:92] pod "kube-proxy-sjbpp" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.252498   67680 pod_ready.go:81] duration metric: took 337.222317ms for pod "kube-proxy-sjbpp" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.252507   67680 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.652692   67680 pod_ready.go:92] pod "kube-scheduler-calico-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.652718   67680 pod_ready.go:81] duration metric: took 400.204909ms for pod "kube-scheduler-calico-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.652729   67680 pod_ready.go:38] duration metric: took 21.325723365s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
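The pod_ready loop above keeps polling each system pod until its Ready condition turns True or the 15m budget runs out. A compact sketch of that pattern with client-go follows; the kubeconfig source and the pod name are placeholders, and this is an illustration of the technique rather than minikube's own wait code:

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 15*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// KUBECONFIG and the pod name below are placeholders for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-kindnet-726705"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}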
	I0416 17:59:53.652742   67680 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:59:53.652800   67680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:59:53.674956   67680 api_server.go:72] duration metric: took 31.807084154s to wait for apiserver process to appear ...
	I0416 17:59:53.674979   67680 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:59:53.675001   67680 api_server.go:253] Checking apiserver healthz at https://192.168.50.220:8443/healthz ...
	I0416 17:59:53.680630   67680 api_server.go:279] https://192.168.50.220:8443/healthz returned 200:
	ok
	I0416 17:59:53.682214   67680 api_server.go:141] control plane version: v1.29.3
	I0416 17:59:53.682241   67680 api_server.go:131] duration metric: took 7.254451ms to wait for apiserver health ...
	I0416 17:59:53.682252   67680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:59:53.857251   67680 system_pods.go:59] 9 kube-system pods found
	I0416 17:59:53.857292   67680 system_pods.go:61] "calico-kube-controllers-787f445f84-b4whw" [ad66dbea-5e1b-4ec7-a590-e08123083605] Running
	I0416 17:59:53.857299   67680 system_pods.go:61] "calico-node-bkzqr" [d3f35563-9a63-434f-b6e2-c15aecd262f2] Running
	I0416 17:59:53.857303   67680 system_pods.go:61] "coredns-76f75df574-6nc69" [6a801bf3-76c7-4140-950a-9a24bc2aa7d4] Running
	I0416 17:59:53.857307   67680 system_pods.go:61] "etcd-calico-726705" [34a5958f-e21f-4391-a23e-99bec66ee776] Running
	I0416 17:59:53.857310   67680 system_pods.go:61] "kube-apiserver-calico-726705" [399ed59c-b133-4b4c-9d39-ddf42bfc1bf0] Running
	I0416 17:59:53.857313   67680 system_pods.go:61] "kube-controller-manager-calico-726705" [12d86864-17d9-46dc-90f4-53507f21f96e] Running
	I0416 17:59:53.857315   67680 system_pods.go:61] "kube-proxy-sjbpp" [eb7274d2-473c-4ffb-8867-19ac63f3747b] Running
	I0416 17:59:53.857320   67680 system_pods.go:61] "kube-scheduler-calico-726705" [4ea980a4-1603-41ae-aeab-56fefd3ba6e8] Running
	I0416 17:59:53.857323   67680 system_pods.go:61] "storage-provisioner" [8c0046c2-65f4-4571-ae01-ec0c8de967a9] Running
	I0416 17:59:53.857330   67680 system_pods.go:74] duration metric: took 175.07111ms to wait for pod list to return data ...
	I0416 17:59:53.857339   67680 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:59:54.051685   67680 default_sa.go:45] found service account: "default"
	I0416 17:59:54.051710   67680 default_sa.go:55] duration metric: took 194.363862ms for default service account to be created ...
	I0416 17:59:54.051717   67680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:59:54.257885   67680 system_pods.go:86] 9 kube-system pods found
	I0416 17:59:54.257913   67680 system_pods.go:89] "calico-kube-controllers-787f445f84-b4whw" [ad66dbea-5e1b-4ec7-a590-e08123083605] Running
	I0416 17:59:54.257919   67680 system_pods.go:89] "calico-node-bkzqr" [d3f35563-9a63-434f-b6e2-c15aecd262f2] Running
	I0416 17:59:54.257923   67680 system_pods.go:89] "coredns-76f75df574-6nc69" [6a801bf3-76c7-4140-950a-9a24bc2aa7d4] Running
	I0416 17:59:54.257928   67680 system_pods.go:89] "etcd-calico-726705" [34a5958f-e21f-4391-a23e-99bec66ee776] Running
	I0416 17:59:54.257932   67680 system_pods.go:89] "kube-apiserver-calico-726705" [399ed59c-b133-4b4c-9d39-ddf42bfc1bf0] Running
	I0416 17:59:54.257935   67680 system_pods.go:89] "kube-controller-manager-calico-726705" [12d86864-17d9-46dc-90f4-53507f21f96e] Running
	I0416 17:59:54.257939   67680 system_pods.go:89] "kube-proxy-sjbpp" [eb7274d2-473c-4ffb-8867-19ac63f3747b] Running
	I0416 17:59:54.257943   67680 system_pods.go:89] "kube-scheduler-calico-726705" [4ea980a4-1603-41ae-aeab-56fefd3ba6e8] Running
	I0416 17:59:54.257946   67680 system_pods.go:89] "storage-provisioner" [8c0046c2-65f4-4571-ae01-ec0c8de967a9] Running
	I0416 17:59:54.257952   67680 system_pods.go:126] duration metric: took 206.230185ms to wait for k8s-apps to be running ...
	I0416 17:59:54.257958   67680 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:59:54.257999   67680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:59:54.276316   67680 system_svc.go:56] duration metric: took 18.341061ms WaitForService to wait for kubelet
	I0416 17:59:54.276346   67680 kubeadm.go:576] duration metric: took 32.408476431s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:59:54.276369   67680 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:59:54.452764   67680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:59:54.452793   67680 node_conditions.go:123] node cpu capacity is 2
	I0416 17:59:54.452804   67680 node_conditions.go:105] duration metric: took 176.430861ms to run NodePressure ...
	I0416 17:59:54.452815   67680 start.go:240] waiting for startup goroutines ...
	I0416 17:59:54.452821   67680 start.go:245] waiting for cluster config update ...
	I0416 17:59:54.452830   67680 start.go:254] writing updated cluster config ...
	I0416 17:59:54.453132   67680 ssh_runner.go:195] Run: rm -f paused
	I0416 17:59:54.506191   67680 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 17:59:54.507963   67680 out.go:177] * Done! kubectl is now configured to use "calico-726705" cluster and "default" namespace by default
	I0416 17:59:52.450633   68924 node_ready.go:49] node "kindnet-726705" has status "Ready":"True"
	I0416 17:59:52.450657   68924 node_ready.go:38] duration metric: took 2.007953886s for node "kindnet-726705" to be "Ready" ...
	I0416 17:59:52.450668   68924 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:59:52.460469   68924 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-dlv6g" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.971276   68924 pod_ready.go:92] pod "coredns-76f75df574-dlv6g" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.971308   68924 pod_ready.go:81] duration metric: took 1.510812192s for pod "coredns-76f75df574-dlv6g" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.971321   68924 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.977067   68924 pod_ready.go:92] pod "etcd-kindnet-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.977090   68924 pod_ready.go:81] duration metric: took 5.760643ms for pod "etcd-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.977104   68924 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.982696   68924 pod_ready.go:92] pod "kube-apiserver-kindnet-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.982717   68924 pod_ready.go:81] duration metric: took 5.604294ms for pod "kube-apiserver-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.982730   68924 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.989121   68924 pod_ready.go:92] pod "kube-controller-manager-kindnet-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:53.989149   68924 pod_ready.go:81] duration metric: took 6.410855ms for pod "kube-controller-manager-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:53.989158   68924 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-r8xjf" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:54.046867   68924 pod_ready.go:92] pod "kube-proxy-r8xjf" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:54.046900   68924 pod_ready.go:81] duration metric: took 57.733053ms for pod "kube-proxy-r8xjf" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:54.046912   68924 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:54.447711   68924 pod_ready.go:92] pod "kube-scheduler-kindnet-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 17:59:54.447733   68924 pod_ready.go:81] duration metric: took 400.814119ms for pod "kube-scheduler-kindnet-726705" in "kube-system" namespace to be "Ready" ...
	I0416 17:59:54.447743   68924 pod_ready.go:38] duration metric: took 1.997061859s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 17:59:54.447756   68924 api_server.go:52] waiting for apiserver process to appear ...
	I0416 17:59:54.447797   68924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:59:54.472955   68924 api_server.go:72] duration metric: took 5.026008338s to wait for apiserver process to appear ...
	I0416 17:59:54.472984   68924 api_server.go:88] waiting for apiserver healthz status ...
	I0416 17:59:54.473004   68924 api_server.go:253] Checking apiserver healthz at https://192.168.61.229:8443/healthz ...
	I0416 17:59:54.481384   68924 api_server.go:279] https://192.168.61.229:8443/healthz returned 200:
	ok
	I0416 17:59:54.482940   68924 api_server.go:141] control plane version: v1.29.3
	I0416 17:59:54.482961   68924 api_server.go:131] duration metric: took 9.970844ms to wait for apiserver health ...
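The healthz wait in both clusters is an HTTPS GET against https://<node-ip>:8443/healthz repeated until the apiserver answers 200 with body "ok". A minimal stand-alone sketch of that check; the endpoint is taken from the log, while the polling interval and the use of InsecureSkipVerify are simplifying assumptions (a real client should trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200/"ok" or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip certificate verification for brevity.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.229:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}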
	I0416 17:59:54.482968   68924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 17:59:54.652111   68924 system_pods.go:59] 8 kube-system pods found
	I0416 17:59:54.652157   68924 system_pods.go:61] "coredns-76f75df574-dlv6g" [de29708f-8d27-4aab-ab71-b91614e1a3c8] Running
	I0416 17:59:54.652166   68924 system_pods.go:61] "etcd-kindnet-726705" [19c979dd-a889-40d5-b1cf-7a855ede4f69] Running
	I0416 17:59:54.652171   68924 system_pods.go:61] "kindnet-5vb2l" [4799cba0-132a-44b3-9481-193b7258ced4] Running
	I0416 17:59:54.652177   68924 system_pods.go:61] "kube-apiserver-kindnet-726705" [f9310118-5fb2-4c22-b91a-595dd76e263f] Running
	I0416 17:59:54.652181   68924 system_pods.go:61] "kube-controller-manager-kindnet-726705" [4ea65dee-c5bc-49de-9281-53f5b9a7b161] Running
	I0416 17:59:54.652187   68924 system_pods.go:61] "kube-proxy-r8xjf" [4e5f5faf-31ff-4753-beea-6180b2d560c9] Running
	I0416 17:59:54.652191   68924 system_pods.go:61] "kube-scheduler-kindnet-726705" [bacdbf9d-9c91-40f2-80b4-a468d38fed67] Running
	I0416 17:59:54.652195   68924 system_pods.go:61] "storage-provisioner" [8831c5db-6d7d-475d-ab5b-d44ee3eb48b9] Running
	I0416 17:59:54.652208   68924 system_pods.go:74] duration metric: took 169.233773ms to wait for pod list to return data ...
	I0416 17:59:54.652221   68924 default_sa.go:34] waiting for default service account to be created ...
	I0416 17:59:54.846697   68924 default_sa.go:45] found service account: "default"
	I0416 17:59:54.846732   68924 default_sa.go:55] duration metric: took 194.499936ms for default service account to be created ...
	I0416 17:59:54.846746   68924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 17:59:55.051211   68924 system_pods.go:86] 8 kube-system pods found
	I0416 17:59:55.051243   68924 system_pods.go:89] "coredns-76f75df574-dlv6g" [de29708f-8d27-4aab-ab71-b91614e1a3c8] Running
	I0416 17:59:55.051251   68924 system_pods.go:89] "etcd-kindnet-726705" [19c979dd-a889-40d5-b1cf-7a855ede4f69] Running
	I0416 17:59:55.051258   68924 system_pods.go:89] "kindnet-5vb2l" [4799cba0-132a-44b3-9481-193b7258ced4] Running
	I0416 17:59:55.051264   68924 system_pods.go:89] "kube-apiserver-kindnet-726705" [f9310118-5fb2-4c22-b91a-595dd76e263f] Running
	I0416 17:59:55.051271   68924 system_pods.go:89] "kube-controller-manager-kindnet-726705" [4ea65dee-c5bc-49de-9281-53f5b9a7b161] Running
	I0416 17:59:55.051276   68924 system_pods.go:89] "kube-proxy-r8xjf" [4e5f5faf-31ff-4753-beea-6180b2d560c9] Running
	I0416 17:59:55.051282   68924 system_pods.go:89] "kube-scheduler-kindnet-726705" [bacdbf9d-9c91-40f2-80b4-a468d38fed67] Running
	I0416 17:59:55.051288   68924 system_pods.go:89] "storage-provisioner" [8831c5db-6d7d-475d-ab5b-d44ee3eb48b9] Running
	I0416 17:59:55.051296   68924 system_pods.go:126] duration metric: took 204.543354ms to wait for k8s-apps to be running ...
	I0416 17:59:55.051308   68924 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 17:59:55.051359   68924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:59:55.078146   68924 system_svc.go:56] duration metric: took 26.827432ms WaitForService to wait for kubelet
	I0416 17:59:55.078182   68924 kubeadm.go:576] duration metric: took 5.631238414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 17:59:55.078209   68924 node_conditions.go:102] verifying NodePressure condition ...
	I0416 17:59:55.247278   68924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 17:59:55.247320   68924 node_conditions.go:123] node cpu capacity is 2
	I0416 17:59:55.247333   68924 node_conditions.go:105] duration metric: took 169.118479ms to run NodePressure ...
	I0416 17:59:55.247349   68924 start.go:240] waiting for startup goroutines ...
	I0416 17:59:55.247359   68924 start.go:245] waiting for cluster config update ...
	I0416 17:59:55.247373   68924 start.go:254] writing updated cluster config ...
	I0416 17:59:55.247676   68924 ssh_runner.go:195] Run: rm -f paused
	I0416 17:59:55.298161   68924 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 17:59:55.301083   68924 out.go:177] * Done! kubectl is now configured to use "kindnet-726705" cluster and "default" namespace by default
	I0416 17:59:51.755079   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:51.755551   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:51.755577   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:51.755506   70875 retry.go:31] will retry after 1.109917667s: waiting for machine to come up
	I0416 17:59:52.867274   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:52.868007   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:52.868036   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:52.867933   70875 retry.go:31] will retry after 997.019923ms: waiting for machine to come up
	I0416 17:59:53.866951   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:53.867504   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:53.867537   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:53.867473   70875 retry.go:31] will retry after 1.344016763s: waiting for machine to come up
	I0416 17:59:55.212622   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:55.213188   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:55.213225   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:55.213157   70875 retry.go:31] will retry after 1.719289923s: waiting for machine to come up
	I0416 17:59:56.933873   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:56.934383   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:56.934403   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:56.934336   70875 retry.go:31] will retry after 2.10573305s: waiting for machine to come up
	I0416 17:59:59.041129   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 17:59:59.041566   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 17:59:59.041590   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 17:59:59.041546   70875 retry.go:31] will retry after 2.621818883s: waiting for machine to come up
	I0416 18:00:01.666081   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:01.666695   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 18:00:01.666723   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 18:00:01.666642   70875 retry.go:31] will retry after 3.415105578s: waiting for machine to come up
	I0416 18:00:05.083442   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:05.084006   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 18:00:05.084035   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 18:00:05.083957   70875 retry.go:31] will retry after 3.54402725s: waiting for machine to come up
	I0416 18:00:08.630056   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:08.630600   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find current IP address of domain custom-flannel-726705 in network mk-custom-flannel-726705
	I0416 18:00:08.630645   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | I0416 18:00:08.630549   70875 retry.go:31] will retry after 6.533819056s: waiting for machine to come up
	I0416 18:00:15.165712   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:15.166331   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has current primary IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:15.166363   70853 main.go:141] libmachine: (custom-flannel-726705) Found IP for machine: 192.168.72.208
	I0416 18:00:15.166373   70853 main.go:141] libmachine: (custom-flannel-726705) Reserving static IP address...
	I0416 18:00:15.166663   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find host DHCP lease matching {name: "custom-flannel-726705", mac: "52:54:00:f8:f8:88", ip: "192.168.72.208"} in network mk-custom-flannel-726705
	I0416 18:00:15.242221   70853 main.go:141] libmachine: (custom-flannel-726705) Reserved static IP address: 192.168.72.208
	I0416 18:00:15.242241   70853 main.go:141] libmachine: (custom-flannel-726705) Waiting for SSH to be available...
	I0416 18:00:15.242262   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Getting to WaitForSSH function...
	I0416 18:00:15.244938   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:15.245352   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705
	I0416 18:00:15.245377   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | unable to find defined IP address of network mk-custom-flannel-726705 interface with MAC address 52:54:00:f8:f8:88
	I0416 18:00:15.245535   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using SSH client type: external
	I0416 18:00:15.245558   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa (-rw-------)
	I0416 18:00:15.245601   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 18:00:15.245615   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | About to run SSH command:
	I0416 18:00:15.245648   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | exit 0
	I0416 18:00:15.249441   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | SSH cmd err, output: exit status 255: 
	I0416 18:00:15.249454   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0416 18:00:15.249461   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | command : exit 0
	I0416 18:00:15.249469   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | err     : exit status 255
	I0416 18:00:15.249480   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | output  : 
	I0416 18:00:18.250611   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Getting to WaitForSSH function...
	I0416 18:00:18.253154   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.253605   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.253630   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.253712   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using SSH client type: external
	I0416 18:00:18.253739   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using SSH private key: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa (-rw-------)
	I0416 18:00:18.253780   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 18:00:18.253799   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | About to run SSH command:
	I0416 18:00:18.253813   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | exit 0
	I0416 18:00:18.390225   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | SSH cmd err, output: <nil>: 
	I0416 18:00:18.390421   70853 main.go:141] libmachine: (custom-flannel-726705) KVM machine creation complete!
	I0416 18:00:18.390684   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetConfigRaw
	I0416 18:00:18.391202   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:18.391372   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:18.391524   70853 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 18:00:18.391538   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetState
	I0416 18:00:18.392869   70853 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 18:00:18.392889   70853 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 18:00:18.392897   70853 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 18:00:18.392906   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.395463   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.395909   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.395956   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.396325   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:18.396487   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.396654   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.396776   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:18.396966   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:18.397167   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:18.397182   70853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 18:00:18.508737   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:00:18.508761   70853 main.go:141] libmachine: Detecting the provisioner...
	I0416 18:00:18.508770   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.512088   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.512524   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.512554   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.512798   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:18.513191   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.513373   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.513556   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:18.513762   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:18.513950   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:18.513966   70853 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 18:00:18.638298   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 18:00:18.638375   70853 main.go:141] libmachine: found compatible host: buildroot
	I0416 18:00:18.638395   70853 main.go:141] libmachine: Provisioning with buildroot...
	I0416 18:00:18.638405   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetMachineName
	I0416 18:00:18.638683   70853 buildroot.go:166] provisioning hostname "custom-flannel-726705"
	I0416 18:00:18.638712   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetMachineName
	I0416 18:00:18.638919   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.641618   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.641968   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.642023   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.642155   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:18.642336   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.642511   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.642700   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:18.642859   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:18.643006   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:18.643018   70853 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-726705 && echo "custom-flannel-726705" | sudo tee /etc/hostname
	I0416 18:00:18.776341   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-726705
	
	I0416 18:00:18.776371   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.779051   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.779447   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.779473   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.779835   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:18.780017   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.780212   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:18.780390   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:18.780559   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:18.780764   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:18.780787   70853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-726705' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-726705/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-726705' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 18:00:18.930369   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 18:00:18.930397   70853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18649-3628/.minikube CaCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18649-3628/.minikube}
	I0416 18:00:18.930417   70853 buildroot.go:174] setting up certificates
	I0416 18:00:18.930442   70853 provision.go:84] configureAuth start
	I0416 18:00:18.930462   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetMachineName
	I0416 18:00:18.930709   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetIP
	I0416 18:00:18.933541   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.933944   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.933973   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.934148   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:18.936478   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.936792   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:18.936823   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:18.936961   70853 provision.go:143] copyHostCerts
	I0416 18:00:18.937009   70853 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem, removing ...
	I0416 18:00:18.937030   70853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem
	I0416 18:00:18.937107   70853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/ca.pem (1082 bytes)
	I0416 18:00:18.937227   70853 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem, removing ...
	I0416 18:00:18.937238   70853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem
	I0416 18:00:18.937269   70853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/cert.pem (1123 bytes)
	I0416 18:00:18.937384   70853 exec_runner.go:144] found /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem, removing ...
	I0416 18:00:18.937397   70853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem
	I0416 18:00:18.937439   70853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18649-3628/.minikube/key.pem (1679 bytes)
	I0416 18:00:18.937524   70853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-726705 san=[127.0.0.1 192.168.72.208 custom-flannel-726705 localhost minikube]
	I0416 18:00:19.043789   70853 provision.go:177] copyRemoteCerts
	I0416 18:00:19.043855   70853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 18:00:19.043877   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.047053   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.047441   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.047467   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.047680   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.047872   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.048058   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.048230   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:19.137908   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 18:00:19.173289   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 18:00:19.203030   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 18:00:19.231087   70853 provision.go:87] duration metric: took 300.626256ms to configureAuth
	I0416 18:00:19.231111   70853 buildroot.go:189] setting minikube options for container-runtime
	I0416 18:00:19.231263   70853 config.go:182] Loaded profile config "custom-flannel-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 18:00:19.231344   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.234018   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.234388   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.234410   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.234633   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.234823   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.234983   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.235100   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.235258   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:19.235467   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:19.235488   70853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 18:00:19.573205   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 18:00:19.573237   70853 main.go:141] libmachine: Checking connection to Docker...
	I0416 18:00:19.573249   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetURL
	I0416 18:00:19.574669   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Using libvirt version 6000000
	I0416 18:00:19.577450   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.577845   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.577866   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.578117   70853 main.go:141] libmachine: Docker is up and running!
	I0416 18:00:19.578134   70853 main.go:141] libmachine: Reticulating splines...
	I0416 18:00:19.578142   70853 client.go:171] duration metric: took 32.740795237s to LocalClient.Create
	I0416 18:00:19.578164   70853 start.go:167] duration metric: took 32.740876359s to libmachine.API.Create "custom-flannel-726705"
	I0416 18:00:19.578171   70853 start.go:293] postStartSetup for "custom-flannel-726705" (driver="kvm2")
	I0416 18:00:19.578192   70853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 18:00:19.578213   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.578527   70853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 18:00:19.578557   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.581627   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.582001   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.582035   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.582155   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.582373   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.582592   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.582750   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:19.677894   70853 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 18:00:19.683287   70853 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 18:00:19.683313   70853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/addons for local assets ...
	I0416 18:00:19.683372   70853 filesync.go:126] Scanning /home/jenkins/minikube-integration/18649-3628/.minikube/files for local assets ...
	I0416 18:00:19.683481   70853 filesync.go:149] local asset: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem -> 109102.pem in /etc/ssl/certs
	I0416 18:00:19.683606   70853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 18:00:19.698090   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /etc/ssl/certs/109102.pem (1708 bytes)
	I0416 18:00:19.730549   70853 start.go:296] duration metric: took 152.362629ms for postStartSetup
	I0416 18:00:19.730601   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetConfigRaw
	I0416 18:00:19.731136   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetIP
	I0416 18:00:19.734429   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.734863   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.734894   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.735140   70853 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/config.json ...
	I0416 18:00:19.735352   70853 start.go:128] duration metric: took 32.917150472s to createHost
	I0416 18:00:19.735384   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.737918   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.740943   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.740949   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.740972   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.741096   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.741274   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.741376   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.741491   70853 main.go:141] libmachine: Using SSH client type: native
	I0416 18:00:19.741673   70853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0416 18:00:19.741680   70853 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 18:00:19.864331   70853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713290419.854414418
	
	I0416 18:00:19.864352   70853 fix.go:216] guest clock: 1713290419.854414418
	I0416 18:00:19.864362   70853 fix.go:229] Guest: 2024-04-16 18:00:19.854414418 +0000 UTC Remote: 2024-04-16 18:00:19.735367817 +0000 UTC m=+33.054203795 (delta=119.046601ms)
	I0416 18:00:19.864382   70853 fix.go:200] guest clock delta is within tolerance: 119.046601ms
	I0416 18:00:19.864388   70853 start.go:83] releasing machines lock for "custom-flannel-726705", held for 33.046308881s
	I0416 18:00:19.864409   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.864729   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetIP
	I0416 18:00:19.867889   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.868531   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.868564   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.868706   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.869178   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.869334   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:19.869411   70853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 18:00:19.869450   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.869497   70853 ssh_runner.go:195] Run: cat /version.json
	I0416 18:00:19.869516   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:19.872781   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.873189   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.873218   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.873238   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.873575   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.873728   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:19.873753   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:19.873763   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.873960   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.873960   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:19.874151   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:19.874149   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:19.874285   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:19.874405   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:19.986748   70853 ssh_runner.go:195] Run: systemctl --version
	I0416 18:00:19.994746   70853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 18:00:20.187776   70853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 18:00:20.197968   70853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 18:00:20.198041   70853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 18:00:20.223667   70853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 18:00:20.223692   70853 start.go:494] detecting cgroup driver to use...
	I0416 18:00:20.223751   70853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 18:00:20.248144   70853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 18:00:20.274630   70853 docker.go:217] disabling cri-docker service (if available) ...
	I0416 18:00:20.274678   70853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 18:00:20.294534   70853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 18:00:20.316881   70853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 18:00:20.466406   70853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 18:00:20.657383   70853 docker.go:233] disabling docker service ...
	I0416 18:00:20.657441   70853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 18:00:20.674201   70853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 18:00:20.692470   70853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 18:00:20.863055   70853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 18:00:21.030723   70853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 18:00:21.053688   70853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 18:00:21.083906   70853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 18:00:21.083959   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.097666   70853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 18:00:21.097723   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.115041   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.129013   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.141986   70853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 18:00:21.156633   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.168821   70853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.191486   70853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 18:00:21.204574   70853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 18:00:21.217699   70853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 18:00:21.217750   70853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 18:00:21.236175   70853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 18:00:21.246548   70853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:21.389640   70853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 18:00:21.597472   70853 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 18:00:21.597554   70853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 18:00:21.603212   70853 start.go:562] Will wait 60s for crictl version
	I0416 18:00:21.603268   70853 ssh_runner.go:195] Run: which crictl
	I0416 18:00:21.608408   70853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 18:00:21.655672   70853 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 18:00:21.655764   70853 ssh_runner.go:195] Run: crio --version
	I0416 18:00:21.694526   70853 ssh_runner.go:195] Run: crio --version
	I0416 18:00:21.764735   70853 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 18:00:21.794888   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetIP
	I0416 18:00:21.803868   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:21.804482   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:21.804513   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:21.804608   70853 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0416 18:00:21.813299   70853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:21.832782   70853 kubeadm.go:877] updating cluster {Name:custom-flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 18:00:21.832944   70853 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 18:00:21.833003   70853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 18:00:21.888143   70853 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 18:00:21.888220   70853 ssh_runner.go:195] Run: which lz4
	I0416 18:00:21.893328   70853 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 18:00:21.899169   70853 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 18:00:21.899199   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 18:00:23.899772   70853 crio.go:462] duration metric: took 2.006466559s to copy over tarball
	I0416 18:00:23.899861   70853 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 18:00:27.390646   70853 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.490757977s)
	I0416 18:00:27.390669   70853 crio.go:469] duration metric: took 3.490871794s to extract the tarball
	I0416 18:00:27.390677   70853 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 18:00:27.441381   70853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 18:00:27.495018   70853 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 18:00:27.495057   70853 cache_images.go:84] Images are preloaded, skipping loading
	I0416 18:00:27.495068   70853 kubeadm.go:928] updating node { 192.168.72.208 8443 v1.29.3 crio true true} ...
	I0416 18:00:27.495184   70853 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-726705 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0416 18:00:27.495259   70853 ssh_runner.go:195] Run: crio config
	I0416 18:00:27.564097   70853 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0416 18:00:27.564142   70853 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 18:00:27.564168   70853 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-726705 NodeName:custom-flannel-726705 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 18:00:27.564327   70853 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-726705"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 18:00:27.564398   70853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 18:00:27.581946   70853 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 18:00:27.582006   70853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 18:00:27.596778   70853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0416 18:00:27.620600   70853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 18:00:27.653618   70853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0416 18:00:27.678176   70853 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0416 18:00:27.683898   70853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 18:00:27.706720   70853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:27.882890   70853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:27.907669   70853 certs.go:68] Setting up /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705 for IP: 192.168.72.208
	I0416 18:00:27.907695   70853 certs.go:194] generating shared ca certs ...
	I0416 18:00:27.907713   70853 certs.go:226] acquiring lock for ca certs: {Name:mkcd512cd3d59d1d7cccadae7f27731fc66f83b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:27.907877   70853 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key
	I0416 18:00:27.907930   70853 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key
	I0416 18:00:27.907938   70853 certs.go:256] generating profile certs ...
	I0416 18:00:27.907997   70853 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.key
	I0416 18:00:27.908010   70853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt with IP's: []
	I0416 18:00:28.048279   70853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt ...
	I0416 18:00:28.048322   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: {Name:mk6b828d2b96effaf22b6c2ec84aebb3f20f7062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.048511   70853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.key ...
	I0416 18:00:28.048531   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.key: {Name:mk030ab0a24a84996e9b36f6aa8cf72fe4a066b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.048644   70853 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key.b7a5c0ec
	I0416 18:00:28.048668   70853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt.b7a5c0ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.208]
	I0416 18:00:28.211750   70853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt.b7a5c0ec ...
	I0416 18:00:28.211794   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt.b7a5c0ec: {Name:mkc45b78b3318c500079043bbc606f14cac7bb2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.212048   70853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key.b7a5c0ec ...
	I0416 18:00:28.212070   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key.b7a5c0ec: {Name:mkc3f611741decf54b31f39aed75c69de9364b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.212209   70853 certs.go:381] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt.b7a5c0ec -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt
	I0416 18:00:28.212340   70853 certs.go:385] copying /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key.b7a5c0ec -> /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key
	I0416 18:00:28.212430   70853 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.key
	I0416 18:00:28.212458   70853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.crt with IP's: []
	I0416 18:00:28.423815   70853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.crt ...
	I0416 18:00:28.423861   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.crt: {Name:mkd8a3e50da785097215b10ef7406a9cb8a93c68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.424030   70853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.key ...
	I0416 18:00:28.424045   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.key: {Name:mk9c4ea8f9e65ca825a1c56c46a015976fc19be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:28.424282   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem (1338 bytes)
	W0416 18:00:28.424324   70853 certs.go:480] ignoring /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910_empty.pem, impossibly tiny 0 bytes
	I0416 18:00:28.424339   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca-key.pem (1675 bytes)
	I0416 18:00:28.424371   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/ca.pem (1082 bytes)
	I0416 18:00:28.424398   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/cert.pem (1123 bytes)
	I0416 18:00:28.424429   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/certs/key.pem (1679 bytes)
	I0416 18:00:28.424484   70853 certs.go:484] found cert: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem (1708 bytes)
	I0416 18:00:28.425155   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 18:00:28.458543   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 18:00:28.496121   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 18:00:28.538504   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 18:00:28.580811   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 18:00:28.635340   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 18:00:28.680240   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 18:00:28.736671   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 18:00:28.769717   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 18:00:28.802699   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/certs/10910.pem --> /usr/share/ca-certificates/10910.pem (1338 bytes)
	I0416 18:00:28.832540   70853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/ssl/certs/109102.pem --> /usr/share/ca-certificates/109102.pem (1708 bytes)
	I0416 18:00:28.866722   70853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 18:00:28.893303   70853 ssh_runner.go:195] Run: openssl version
	I0416 18:00:28.903622   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109102.pem && ln -fs /usr/share/ca-certificates/109102.pem /etc/ssl/certs/109102.pem"
	I0416 18:00:28.928137   70853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109102.pem
	I0416 18:00:28.933963   70853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 16 16:29 /usr/share/ca-certificates/109102.pem
	I0416 18:00:28.934013   70853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109102.pem
	I0416 18:00:28.940815   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109102.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 18:00:28.956540   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 18:00:28.970256   70853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:28.975737   70853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 16 16:20 /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:28.975781   70853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 18:00:28.982773   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 18:00:28.996658   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10910.pem && ln -fs /usr/share/ca-certificates/10910.pem /etc/ssl/certs/10910.pem"
	I0416 18:00:29.009701   70853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10910.pem
	I0416 18:00:29.017036   70853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 16 16:29 /usr/share/ca-certificates/10910.pem
	I0416 18:00:29.017092   70853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10910.pem
	I0416 18:00:29.025473   70853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10910.pem /etc/ssl/certs/51391683.0"
	I0416 18:00:29.040792   70853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 18:00:29.047994   70853 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 18:00:29.048046   70853 kubeadm.go:391] StartCluster: {Name:custom-flannel-726705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:custom-flannel-726705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 18:00:29.048145   70853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 18:00:29.048205   70853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 18:00:29.102513   70853 cri.go:89] found id: ""
	I0416 18:00:29.102594   70853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 18:00:29.115566   70853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 18:00:29.126909   70853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 18:00:29.138518   70853 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 18:00:29.138535   70853 kubeadm.go:156] found existing configuration files:
	
	I0416 18:00:29.138568   70853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 18:00:29.149607   70853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 18:00:29.149655   70853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 18:00:29.160449   70853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 18:00:29.171821   70853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 18:00:29.171875   70853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 18:00:29.184397   70853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 18:00:29.197577   70853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 18:00:29.197631   70853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 18:00:29.210455   70853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 18:00:29.220309   70853 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 18:00:29.220370   70853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 18:00:29.233920   70853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 18:00:29.452954   70853 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 18:00:40.930575   70853 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 18:00:40.930652   70853 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 18:00:40.930747   70853 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 18:00:40.930862   70853 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 18:00:40.930967   70853 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 18:00:40.931041   70853 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 18:00:40.932957   70853 out.go:204]   - Generating certificates and keys ...
	I0416 18:00:40.933049   70853 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 18:00:40.933151   70853 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 18:00:40.933264   70853 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 18:00:40.933354   70853 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 18:00:40.933449   70853 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 18:00:40.933530   70853 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 18:00:40.933617   70853 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 18:00:40.933757   70853 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-726705 localhost] and IPs [192.168.72.208 127.0.0.1 ::1]
	I0416 18:00:40.933808   70853 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 18:00:40.933918   70853 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-726705 localhost] and IPs [192.168.72.208 127.0.0.1 ::1]
	I0416 18:00:40.933995   70853 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 18:00:40.934081   70853 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 18:00:40.934121   70853 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 18:00:40.934193   70853 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 18:00:40.934243   70853 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 18:00:40.934317   70853 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 18:00:40.934390   70853 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 18:00:40.934471   70853 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 18:00:40.934563   70853 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 18:00:40.934672   70853 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 18:00:40.934794   70853 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 18:00:40.936706   70853 out.go:204]   - Booting up control plane ...
	I0416 18:00:40.936820   70853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 18:00:40.936937   70853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 18:00:40.937001   70853 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 18:00:40.937094   70853 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 18:00:40.937166   70853 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 18:00:40.937204   70853 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 18:00:40.937334   70853 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 18:00:40.937425   70853 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002767 seconds
	I0416 18:00:40.937518   70853 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 18:00:40.937622   70853 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 18:00:40.937693   70853 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 18:00:40.937927   70853 kubeadm.go:309] [mark-control-plane] Marking the node custom-flannel-726705 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 18:00:40.937991   70853 kubeadm.go:309] [bootstrap-token] Using token: 1suvyo.23p3gvrlr33x42m0
	I0416 18:00:40.939765   70853 out.go:204]   - Configuring RBAC rules ...
	I0416 18:00:40.939868   70853 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 18:00:40.939960   70853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 18:00:40.940096   70853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 18:00:40.940285   70853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 18:00:40.940458   70853 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 18:00:40.940572   70853 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 18:00:40.940718   70853 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 18:00:40.940777   70853 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 18:00:40.940868   70853 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 18:00:40.940878   70853 kubeadm.go:309] 
	I0416 18:00:40.940960   70853 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 18:00:40.940971   70853 kubeadm.go:309] 
	I0416 18:00:40.941083   70853 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 18:00:40.941092   70853 kubeadm.go:309] 
	I0416 18:00:40.941125   70853 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 18:00:40.941207   70853 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 18:00:40.941272   70853 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 18:00:40.941282   70853 kubeadm.go:309] 
	I0416 18:00:40.941380   70853 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 18:00:40.941390   70853 kubeadm.go:309] 
	I0416 18:00:40.941447   70853 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 18:00:40.941457   70853 kubeadm.go:309] 
	I0416 18:00:40.941526   70853 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 18:00:40.941623   70853 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 18:00:40.941718   70853 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 18:00:40.941728   70853 kubeadm.go:309] 
	I0416 18:00:40.941816   70853 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 18:00:40.941941   70853 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 18:00:40.941953   70853 kubeadm.go:309] 
	I0416 18:00:40.942080   70853 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1suvyo.23p3gvrlr33x42m0 \
	I0416 18:00:40.942239   70853 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 \
	I0416 18:00:40.942282   70853 kubeadm.go:309] 	--control-plane 
	I0416 18:00:40.942292   70853 kubeadm.go:309] 
	I0416 18:00:40.942384   70853 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 18:00:40.942391   70853 kubeadm.go:309] 
	I0416 18:00:40.942461   70853 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1suvyo.23p3gvrlr33x42m0 \
	I0416 18:00:40.942572   70853 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:85e2962b795e5b8eb54c4ef0edb0477357414eab65a5ad5f6177a9a3c66b59c2 
	I0416 18:00:40.942591   70853 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0416 18:00:40.945415   70853 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0416 18:00:40.947027   70853 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0416 18:00:40.947087   70853 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0416 18:00:40.958786   70853 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0416 18:00:40.958822   70853 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0416 18:00:41.152889   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0416 18:00:41.678971   70853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 18:00:41.679090   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-726705 minikube.k8s.io/updated_at=2024_04_16T18_00_41_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4 minikube.k8s.io/name=custom-flannel-726705 minikube.k8s.io/primary=true
	I0416 18:00:41.679107   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:41.837109   70853 ops.go:34] apiserver oom_adj: -16
	I0416 18:00:41.837596   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:42.337610   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:42.838288   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:43.337867   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:43.838354   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:44.338620   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:44.837869   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:45.337670   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:45.837761   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:46.338607   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:46.838070   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:47.338644   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:47.837960   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:48.338197   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:48.838492   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:49.337914   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:49.838057   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:50.338264   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:50.837668   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:51.338598   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:51.838284   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:52.337707   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:52.838534   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:53.338572   70853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 18:00:53.529231   70853 kubeadm.go:1107] duration metric: took 11.850195459s to wait for elevateKubeSystemPrivileges
	W0416 18:00:53.529271   70853 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 18:00:53.529281   70853 kubeadm.go:393] duration metric: took 24.481237931s to StartCluster
	I0416 18:00:53.529300   70853 settings.go:142] acquiring lock: {Name:mk5b18c9e8ce43a76fc286d43a0bc732eb03f4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:53.529379   70853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 18:00:53.530285   70853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/kubeconfig: {Name:mkf51c53dc5467f31868793397add9d11ed1a6fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 18:00:53.530569   70853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 18:00:53.530592   70853 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 18:00:53.533110   70853 out.go:177] * Verifying Kubernetes components...
	I0416 18:00:53.530691   70853 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 18:00:53.530789   70853 config.go:182] Loaded profile config "custom-flannel-726705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 18:00:53.533189   70853 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-726705"
	I0416 18:00:53.534591   70853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 18:00:53.534600   70853 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-726705"
	I0416 18:00:53.534629   70853 host.go:66] Checking if "custom-flannel-726705" exists ...
	I0416 18:00:53.533194   70853 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-726705"
	I0416 18:00:53.534724   70853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-726705"
	I0416 18:00:53.534944   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.534970   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.535102   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.535171   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.549750   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0416 18:00:53.550300   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.550930   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.550954   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.550976   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0416 18:00:53.551285   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.551388   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.551825   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.551844   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.551853   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.551878   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.552241   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.552426   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetState
	I0416 18:00:53.555816   70853 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-726705"
	I0416 18:00:53.555853   70853 host.go:66] Checking if "custom-flannel-726705" exists ...
	I0416 18:00:53.556124   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.556158   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.568226   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0416 18:00:53.568802   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.569317   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.569344   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.569704   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.569951   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetState
	I0416 18:00:53.571640   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:53.573579   70853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 18:00:53.572221   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
	I0416 18:00:53.573987   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.574907   70853 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 18:00:53.574923   70853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 18:00:53.574942   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:53.575508   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.575532   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.575895   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.576350   70853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 18:00:53.576376   70853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 18:00:53.578417   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:53.578859   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:53.578875   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:53.579110   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:53.579290   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:53.579520   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:53.579762   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:53.592985   70853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0416 18:00:53.593435   70853 main.go:141] libmachine: () Calling .GetVersion
	I0416 18:00:53.593913   70853 main.go:141] libmachine: Using API Version  1
	I0416 18:00:53.593926   70853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 18:00:53.594311   70853 main.go:141] libmachine: () Calling .GetMachineName
	I0416 18:00:53.594607   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetState
	I0416 18:00:53.596254   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .DriverName
	I0416 18:00:53.596499   70853 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 18:00:53.596512   70853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 18:00:53.596527   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHHostname
	I0416 18:00:53.599489   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:53.599943   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:f8:88", ip: ""} in network mk-custom-flannel-726705: {Iface:virbr3 ExpiryTime:2024-04-16 19:00:04 +0000 UTC Type:0 Mac:52:54:00:f8:f8:88 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:custom-flannel-726705 Clientid:01:52:54:00:f8:f8:88}
	I0416 18:00:53.599957   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | domain custom-flannel-726705 has defined IP address 192.168.72.208 and MAC address 52:54:00:f8:f8:88 in network mk-custom-flannel-726705
	I0416 18:00:53.600130   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHPort
	I0416 18:00:53.600310   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHKeyPath
	I0416 18:00:53.600451   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .GetSSHUsername
	I0416 18:00:53.600584   70853 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/custom-flannel-726705/id_rsa Username:docker}
	I0416 18:00:53.869166   70853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 18:00:53.869219   70853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 18:00:53.958197   70853 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-726705" to be "Ready" ...
	I0416 18:00:53.975876   70853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 18:00:54.104310   70853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 18:00:54.558384   70853 start.go:946] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0416 18:00:54.558489   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.558515   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.558822   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.558838   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.558848   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.558858   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.558903   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Closing plugin on server side
	I0416 18:00:54.559119   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.559137   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.573983   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.574007   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.574268   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.574283   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.574304   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Closing plugin on server side
	I0416 18:00:54.896849   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.896879   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.897233   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.897254   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.897267   70853 main.go:141] libmachine: Making call to close driver server
	I0416 18:00:54.897276   70853 main.go:141] libmachine: (custom-flannel-726705) Calling .Close
	I0416 18:00:54.897234   70853 main.go:141] libmachine: (custom-flannel-726705) DBG | Closing plugin on server side
	I0416 18:00:54.897504   70853 main.go:141] libmachine: Successfully made call to close driver server
	I0416 18:00:54.897518   70853 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 18:00:54.899708   70853 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0416 18:00:54.901172   70853 addons.go:505] duration metric: took 1.370487309s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0416 18:00:55.063442   70853 kapi.go:248] "coredns" deployment in "kube-system" namespace and "custom-flannel-726705" context rescaled to 1 replicas
	I0416 18:00:55.962618   70853 node_ready.go:53] node "custom-flannel-726705" has status "Ready":"False"
	I0416 18:00:58.463146   70853 node_ready.go:53] node "custom-flannel-726705" has status "Ready":"False"
	I0416 18:00:58.962576   70853 node_ready.go:49] node "custom-flannel-726705" has status "Ready":"True"
	I0416 18:00:58.962599   70853 node_ready.go:38] duration metric: took 5.004374341s for node "custom-flannel-726705" to be "Ready" ...
	I0416 18:00:58.962608   70853 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:00:58.972868   70853 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-vxmxv" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:00.981506   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:03.484724   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:05.981091   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:08.480657   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:10.980261   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:13.481804   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:15.979720   70853 pod_ready.go:102] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"False"
	I0416 18:01:16.480539   70853 pod_ready.go:92] pod "coredns-76f75df574-vxmxv" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.480566   70853 pod_ready.go:81] duration metric: took 17.507670532s for pod "coredns-76f75df574-vxmxv" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.480578   70853 pod_ready.go:78] waiting up to 15m0s for pod "etcd-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.486146   70853 pod_ready.go:92] pod "etcd-custom-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.486166   70853 pod_ready.go:81] duration metric: took 5.580976ms for pod "etcd-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.486177   70853 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.491969   70853 pod_ready.go:92] pod "kube-apiserver-custom-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.491988   70853 pod_ready.go:81] duration metric: took 5.803844ms for pod "kube-apiserver-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.491997   70853 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.497031   70853 pod_ready.go:92] pod "kube-controller-manager-custom-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.497048   70853 pod_ready.go:81] duration metric: took 5.0462ms for pod "kube-controller-manager-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.497056   70853 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-drjz7" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.501901   70853 pod_ready.go:92] pod "kube-proxy-drjz7" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.501922   70853 pod_ready.go:81] duration metric: took 4.859685ms for pod "kube-proxy-drjz7" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.501931   70853 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.877454   70853 pod_ready.go:92] pod "kube-scheduler-custom-flannel-726705" in "kube-system" namespace has status "Ready":"True"
	I0416 18:01:16.877478   70853 pod_ready.go:81] duration metric: took 375.53778ms for pod "kube-scheduler-custom-flannel-726705" in "kube-system" namespace to be "Ready" ...
	I0416 18:01:16.877490   70853 pod_ready.go:38] duration metric: took 17.9148606s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 18:01:16.877507   70853 api_server.go:52] waiting for apiserver process to appear ...
	I0416 18:01:16.877560   70853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 18:01:16.896766   70853 api_server.go:72] duration metric: took 23.366138063s to wait for apiserver process to appear ...
	I0416 18:01:16.896789   70853 api_server.go:88] waiting for apiserver healthz status ...
	I0416 18:01:16.896808   70853 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I0416 18:01:16.901964   70853 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I0416 18:01:16.902938   70853 api_server.go:141] control plane version: v1.29.3
	I0416 18:01:16.902960   70853 api_server.go:131] duration metric: took 6.164589ms to wait for apiserver health ...
	I0416 18:01:16.902967   70853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 18:01:17.080912   70853 system_pods.go:59] 7 kube-system pods found
	I0416 18:01:17.080949   70853 system_pods.go:61] "coredns-76f75df574-vxmxv" [e490c26b-3944-42eb-b1df-31f6f943af8d] Running
	I0416 18:01:17.080955   70853 system_pods.go:61] "etcd-custom-flannel-726705" [a1c5ef0f-43a8-4361-ba18-25c9be11932e] Running
	I0416 18:01:17.080958   70853 system_pods.go:61] "kube-apiserver-custom-flannel-726705" [05676c18-da79-44a6-a5bd-0760cb3b9443] Running
	I0416 18:01:17.080961   70853 system_pods.go:61] "kube-controller-manager-custom-flannel-726705" [73db9f1e-a84e-4b86-8d61-6b6635c93bce] Running
	I0416 18:01:17.080964   70853 system_pods.go:61] "kube-proxy-drjz7" [fd2be830-ac99-40b2-9c33-8f58e6bde0af] Running
	I0416 18:01:17.080967   70853 system_pods.go:61] "kube-scheduler-custom-flannel-726705" [931dcb52-df72-4e9b-971f-78dddb76617a] Running
	I0416 18:01:17.080969   70853 system_pods.go:61] "storage-provisioner" [a51f1b5d-557b-4f3c-b76c-909060d453ed] Running
	I0416 18:01:17.080975   70853 system_pods.go:74] duration metric: took 178.002636ms to wait for pod list to return data ...
	I0416 18:01:17.080982   70853 default_sa.go:34] waiting for default service account to be created ...
	I0416 18:01:17.277162   70853 default_sa.go:45] found service account: "default"
	I0416 18:01:17.277186   70853 default_sa.go:55] duration metric: took 196.198426ms for default service account to be created ...
	I0416 18:01:17.277195   70853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 18:01:17.481036   70853 system_pods.go:86] 7 kube-system pods found
	I0416 18:01:17.481062   70853 system_pods.go:89] "coredns-76f75df574-vxmxv" [e490c26b-3944-42eb-b1df-31f6f943af8d] Running
	I0416 18:01:17.481067   70853 system_pods.go:89] "etcd-custom-flannel-726705" [a1c5ef0f-43a8-4361-ba18-25c9be11932e] Running
	I0416 18:01:17.481071   70853 system_pods.go:89] "kube-apiserver-custom-flannel-726705" [05676c18-da79-44a6-a5bd-0760cb3b9443] Running
	I0416 18:01:17.481078   70853 system_pods.go:89] "kube-controller-manager-custom-flannel-726705" [73db9f1e-a84e-4b86-8d61-6b6635c93bce] Running
	I0416 18:01:17.481084   70853 system_pods.go:89] "kube-proxy-drjz7" [fd2be830-ac99-40b2-9c33-8f58e6bde0af] Running
	I0416 18:01:17.481089   70853 system_pods.go:89] "kube-scheduler-custom-flannel-726705" [931dcb52-df72-4e9b-971f-78dddb76617a] Running
	I0416 18:01:17.481095   70853 system_pods.go:89] "storage-provisioner" [a51f1b5d-557b-4f3c-b76c-909060d453ed] Running
	I0416 18:01:17.481104   70853 system_pods.go:126] duration metric: took 203.901975ms to wait for k8s-apps to be running ...
	I0416 18:01:17.481113   70853 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 18:01:17.481174   70853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 18:01:17.500649   70853 system_svc.go:56] duration metric: took 19.524367ms WaitForService to wait for kubelet
	I0416 18:01:17.500689   70853 kubeadm.go:576] duration metric: took 23.97006282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 18:01:17.500724   70853 node_conditions.go:102] verifying NodePressure condition ...
	I0416 18:01:17.677648   70853 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 18:01:17.677684   70853 node_conditions.go:123] node cpu capacity is 2
	I0416 18:01:17.677698   70853 node_conditions.go:105] duration metric: took 176.967127ms to run NodePressure ...
	I0416 18:01:17.677710   70853 start.go:240] waiting for startup goroutines ...
	I0416 18:01:17.677719   70853 start.go:245] waiting for cluster config update ...
	I0416 18:01:17.677731   70853 start.go:254] writing updated cluster config ...
	I0416 18:01:17.678030   70853 ssh_runner.go:195] Run: rm -f paused
	I0416 18:01:17.732046   70853 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 18:01:17.734931   70853 out.go:177] * Done! kubectl is now configured to use "custom-flannel-726705" cluster and "default" namespace by default
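
	With the start complete, kubectl is left pointing at the new "custom-flannel-726705" context. A minimal follow-up check, assuming the kubeconfig written during this run is the active one, would be:

	  # sketch: confirm the node and kube-system pods reported Ready in the log above
	  kubectl --context custom-flannel-726705 get nodes -o wide
	  kubectl --context custom-flannel-726705 -n kube-system get pods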
	
	
	==> CRI-O <==
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.001448339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713291191001421895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eebc49eb-48a5-46d5-beb9-ca8b61dde953 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.002077085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98567621-dd64-4ad3-96cb-5dedcd1e1718 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.002160642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98567621-dd64-4ad3-96cb-5dedcd1e1718 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.002352826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a,PodSandboxId:6319f45ad8d09403fe91dcab6aad0cfae7248a35f2f572ea1205a259fe3113d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290252018855193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v6dwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094,},Annotations:map[string]string{io.kubernetes.container.hash: 50be172b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621,PodSandboxId:02fa8abeeed314035e0e0d0933d96a7abee7e3d3d4c8700534261893ca183856,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290251987311453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2td7t,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 01c407e1-20b0-4554-924e-08e9c1a6e71e,},Annotations:map[string]string{io.kubernetes.container.hash: 329918b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf,PodSandboxId:abe7cfb396097c0144d0464fa40bf76055bc85e07ed63ac8850ab5bff0bb6b4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1713290251280344271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e316ce-7709-4328-b30a-763f622a525c,},Annotations:map[string]string{io.kubernetes.container.hash: b4e05270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65,PodSandboxId:b98107d87af93d76647c36e8e0cfaad7d6327af47661bb823aeefa9186c7bcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713290250739977864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lg46q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c5c13-25ef-4b45-854d-696e53410d7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9dcddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b,PodSandboxId:c401011c9d45fa9a17cec50b7239dcfb0fe3c397a71862312ecfbbe83d24dc4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713290229713561163
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920,PodSandboxId:1aa7c16860ef9dffe23d6533c8b5ceb9a2571ddf507e4db387edbdf0feb5fb8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713290229658935951,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8100a67493dd5fe1046dad3a22563f,},Annotations:map[string]string{io.kubernetes.container.hash: a0f2abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32,PodSandboxId:f28b5381a35a74c20c189c557c02ebc0a7bcbb6835b600a62ee7e81bbe800536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713290229610397882,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43fd45c46c879e08cdd8dafc82bace36,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292,PodSandboxId:ab645c7d517ed231709f18a70af57f9f6a298b50f077d0c55ba7ef74902766f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713290229548866832,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc5f3cccb0bfef719cbb5135a268fbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8,PodSandboxId:ab47a605f6be13b0d6a871341ba0ffa9479776c5ce9a533e9781a74cd5324110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289931959295728,Labels:map[s
tring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98567621-dd64-4ad3-96cb-5dedcd1e1718 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.052841473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a97c057d-0699-4117-9aba-8f9d3040d319 name=/runtime.v1.RuntimeService/Version
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.052911577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a97c057d-0699-4117-9aba-8f9d3040d319 name=/runtime.v1.RuntimeService/Version
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.054628551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95489b14-ed6d-49e6-94c0-bf93d118cd51 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.055270038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713291191055245843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95489b14-ed6d-49e6-94c0-bf93d118cd51 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.056037756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4495d854-d4ab-411f-ac34-7ed709148dbe name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.056123926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4495d854-d4ab-411f-ac34-7ed709148dbe name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.056312331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a,PodSandboxId:6319f45ad8d09403fe91dcab6aad0cfae7248a35f2f572ea1205a259fe3113d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290252018855193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v6dwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094,},Annotations:map[string]string{io.kubernetes.container.hash: 50be172b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621,PodSandboxId:02fa8abeeed314035e0e0d0933d96a7abee7e3d3d4c8700534261893ca183856,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290251987311453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2td7t,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 01c407e1-20b0-4554-924e-08e9c1a6e71e,},Annotations:map[string]string{io.kubernetes.container.hash: 329918b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf,PodSandboxId:abe7cfb396097c0144d0464fa40bf76055bc85e07ed63ac8850ab5bff0bb6b4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1713290251280344271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e316ce-7709-4328-b30a-763f622a525c,},Annotations:map[string]string{io.kubernetes.container.hash: b4e05270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65,PodSandboxId:b98107d87af93d76647c36e8e0cfaad7d6327af47661bb823aeefa9186c7bcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713290250739977864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lg46q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c5c13-25ef-4b45-854d-696e53410d7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9dcddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b,PodSandboxId:c401011c9d45fa9a17cec50b7239dcfb0fe3c397a71862312ecfbbe83d24dc4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713290229713561163
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920,PodSandboxId:1aa7c16860ef9dffe23d6533c8b5ceb9a2571ddf507e4db387edbdf0feb5fb8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713290229658935951,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8100a67493dd5fe1046dad3a22563f,},Annotations:map[string]string{io.kubernetes.container.hash: a0f2abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32,PodSandboxId:f28b5381a35a74c20c189c557c02ebc0a7bcbb6835b600a62ee7e81bbe800536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713290229610397882,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43fd45c46c879e08cdd8dafc82bace36,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292,PodSandboxId:ab645c7d517ed231709f18a70af57f9f6a298b50f077d0c55ba7ef74902766f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713290229548866832,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc5f3cccb0bfef719cbb5135a268fbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8,PodSandboxId:ab47a605f6be13b0d6a871341ba0ffa9479776c5ce9a533e9781a74cd5324110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289931959295728,Labels:map[s
tring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4495d854-d4ab-411f-ac34-7ed709148dbe name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.100368682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a681f25-9c02-4ec7-bf8d-7d0ba319fc2f name=/runtime.v1.RuntimeService/Version
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.100587209Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a681f25-9c02-4ec7-bf8d-7d0ba319fc2f name=/runtime.v1.RuntimeService/Version
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.102338386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8cf3738-c566-4400-8a91-51558417a650 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.103073385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713291191102928769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8cf3738-c566-4400-8a91-51558417a650 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.104000527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf5ff047-bf91-4171-a499-58fc24fa98d2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.104082655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf5ff047-bf91-4171-a499-58fc24fa98d2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.104309864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a,PodSandboxId:6319f45ad8d09403fe91dcab6aad0cfae7248a35f2f572ea1205a259fe3113d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290252018855193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v6dwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094,},Annotations:map[string]string{io.kubernetes.container.hash: 50be172b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621,PodSandboxId:02fa8abeeed314035e0e0d0933d96a7abee7e3d3d4c8700534261893ca183856,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290251987311453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2td7t,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 01c407e1-20b0-4554-924e-08e9c1a6e71e,},Annotations:map[string]string{io.kubernetes.container.hash: 329918b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf,PodSandboxId:abe7cfb396097c0144d0464fa40bf76055bc85e07ed63ac8850ab5bff0bb6b4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1713290251280344271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e316ce-7709-4328-b30a-763f622a525c,},Annotations:map[string]string{io.kubernetes.container.hash: b4e05270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65,PodSandboxId:b98107d87af93d76647c36e8e0cfaad7d6327af47661bb823aeefa9186c7bcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713290250739977864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lg46q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c5c13-25ef-4b45-854d-696e53410d7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9dcddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b,PodSandboxId:c401011c9d45fa9a17cec50b7239dcfb0fe3c397a71862312ecfbbe83d24dc4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713290229713561163
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920,PodSandboxId:1aa7c16860ef9dffe23d6533c8b5ceb9a2571ddf507e4db387edbdf0feb5fb8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713290229658935951,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8100a67493dd5fe1046dad3a22563f,},Annotations:map[string]string{io.kubernetes.container.hash: a0f2abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32,PodSandboxId:f28b5381a35a74c20c189c557c02ebc0a7bcbb6835b600a62ee7e81bbe800536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713290229610397882,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43fd45c46c879e08cdd8dafc82bace36,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292,PodSandboxId:ab645c7d517ed231709f18a70af57f9f6a298b50f077d0c55ba7ef74902766f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713290229548866832,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc5f3cccb0bfef719cbb5135a268fbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8,PodSandboxId:ab47a605f6be13b0d6a871341ba0ffa9479776c5ce9a533e9781a74cd5324110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289931959295728,Labels:map[s
tring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf5ff047-bf91-4171-a499-58fc24fa98d2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.146787580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69c79ec2-246d-4ab8-a37a-21aa2335083a name=/runtime.v1.RuntimeService/Version
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.146891766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69c79ec2-246d-4ab8-a37a-21aa2335083a name=/runtime.v1.RuntimeService/Version
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.148900354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa9ab372-e461-427d-aa39-d5234d8e950e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.149304583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713291191149283986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa9ab372-e461-427d-aa39-d5234d8e950e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.150176342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f492e86-2e69-4506-8bab-9f2902a4e0f9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.150254275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f492e86-2e69-4506-8bab-9f2902a4e0f9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 18:13:11 default-k8s-diff-port-304316 crio[721]: time="2024-04-16 18:13:11.150473654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a,PodSandboxId:6319f45ad8d09403fe91dcab6aad0cfae7248a35f2f572ea1205a259fe3113d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290252018855193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v6dwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0ed4b7c-8f8a-4bdf-ba2f-cb372d256094,},Annotations:map[string]string{io.kubernetes.container.hash: 50be172b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621,PodSandboxId:02fa8abeeed314035e0e0d0933d96a7abee7e3d3d4c8700534261893ca183856,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713290251987311453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2td7t,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 01c407e1-20b0-4554-924e-08e9c1a6e71e,},Annotations:map[string]string{io.kubernetes.container.hash: 329918b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf,PodSandboxId:abe7cfb396097c0144d0464fa40bf76055bc85e07ed63ac8850ab5bff0bb6b4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1713290251280344271,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e316ce-7709-4328-b30a-763f622a525c,},Annotations:map[string]string{io.kubernetes.container.hash: b4e05270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65,PodSandboxId:b98107d87af93d76647c36e8e0cfaad7d6327af47661bb823aeefa9186c7bcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713290250739977864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lg46q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3c5c13-25ef-4b45-854d-696e53410d7a,},Annotations:map[string]string{io.kubernetes.container.hash: d9dcddf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b,PodSandboxId:c401011c9d45fa9a17cec50b7239dcfb0fe3c397a71862312ecfbbe83d24dc4b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713290229713561163
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920,PodSandboxId:1aa7c16860ef9dffe23d6533c8b5ceb9a2571ddf507e4db387edbdf0feb5fb8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713290229658935951,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8100a67493dd5fe1046dad3a22563f,},Annotations:map[string]string{io.kubernetes.container.hash: a0f2abe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32,PodSandboxId:f28b5381a35a74c20c189c557c02ebc0a7bcbb6835b600a62ee7e81bbe800536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713290229610397882,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43fd45c46c879e08cdd8dafc82bace36,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292,PodSandboxId:ab645c7d517ed231709f18a70af57f9f6a298b50f077d0c55ba7ef74902766f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713290229548866832,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc5f3cccb0bfef719cbb5135a268fbe,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8,PodSandboxId:ab47a605f6be13b0d6a871341ba0ffa9479776c5ce9a533e9781a74cd5324110,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713289931959295728,Labels:map[s
tring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304316,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36d7c5da89f3815dec2c986d65e6e74c,},Annotations:map[string]string{io.kubernetes.container.hash: ed33355e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f492e86-2e69-4506-8bab-9f2902a4e0f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	966b6aa466077       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   6319f45ad8d09       coredns-76f75df574-v6dwd
	59869f79e95be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   02fa8abeeed31       coredns-76f75df574-2td7t
	3fcdda7db7fdf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   abe7cfb396097       storage-provisioner
	8bd8bce7ea296       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   15 minutes ago      Running             kube-proxy                0                   b98107d87af93       kube-proxy-lg46q
	6ac416d88f5c5       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   16 minutes ago      Running             kube-apiserver            2                   c401011c9d45f       kube-apiserver-default-k8s-diff-port-304316
	b325eb7ba09a6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   1aa7c16860ef9       etcd-default-k8s-diff-port-304316
	fcd49b74b51a0       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   16 minutes ago      Running             kube-controller-manager   2                   f28b5381a35a7       kube-controller-manager-default-k8s-diff-port-304316
	4e37c3d94e0ab       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   16 minutes ago      Running             kube-scheduler            2                   ab645c7d517ed       kube-scheduler-default-k8s-diff-port-304316
	21b906fbaf579       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   20 minutes ago      Exited              kube-apiserver            1                   ab47a605f6be1       kube-apiserver-default-k8s-diff-port-304316
	
	
	==> coredns [59869f79e95be62fa942d0e46790caabf2540db631384124e1e3244cbecc0621] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [966b6aa466077db5178e6ecb15c619599bba00bd484c51dcdcbf85d1eba4066a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-304316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-304316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=60e3ec9dd69d5c93d6197c9fdc4147a73975b8a4
	                    minikube.k8s.io/name=default-k8s-diff-port-304316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T17_57_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 17:57:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-304316
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 18:13:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 18:12:52 +0000   Tue, 16 Apr 2024 17:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 18:12:52 +0000   Tue, 16 Apr 2024 17:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 18:12:52 +0000   Tue, 16 Apr 2024 17:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 18:12:52 +0000   Tue, 16 Apr 2024 17:57:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    default-k8s-diff-port-304316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f770da22fc9843ffb7224882cc8739f2
	  System UUID:                f770da22-fc98-43ff-b722-4882cc8739f2
	  Boot ID:                    806d383a-4938-4633-8296-80747352de96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-2td7t                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-76f75df574-v6dwd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-304316                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-304316             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-304316    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-lg46q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-304316             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-qv9w5                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node default-k8s-diff-port-304316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node default-k8s-diff-port-304316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node default-k8s-diff-port-304316 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node default-k8s-diff-port-304316 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node default-k8s-diff-port-304316 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node default-k8s-diff-port-304316 event: Registered Node default-k8s-diff-port-304316 in Controller
	
	
	==> dmesg <==
	[  +0.052065] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043065] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.729055] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.610662] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.474801] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr16 17:52] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.060317] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067815] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.167641] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.156969] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.318911] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +5.128503] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.060264] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.158581] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +5.597438] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.671174] kauditd_printk_skb: 84 callbacks suppressed
	[Apr16 17:57] systemd-fstab-generator[3627]: Ignoring "noauto" option for root device
	[  +0.068183] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.769227] systemd-fstab-generator[3952]: Ignoring "noauto" option for root device
	[  +0.080159] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.494755] systemd-fstab-generator[4166]: Ignoring "noauto" option for root device
	[  +0.094740] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.274379] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [b325eb7ba09a605b300022a03167286ae02765c70e4d96dd2fd5a42ae0241920] <==
	{"level":"warn","ts":"2024-04-16T17:57:33.655009Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:57:33.217175Z","time spent":"437.821951ms","remote":"127.0.0.1:55574","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4701,"request content":"key:\"/registry/pods/kube-system/coredns-76f75df574-2td7t\" "}
	{"level":"info","ts":"2024-04-16T17:57:33.655347Z","caller":"traceutil/trace.go:171","msg":"trace[220973017] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"449.712345ms","start":"2024-04-16T17:57:33.205617Z","end":"2024-04-16T17:57:33.655329Z","steps":["trace[220973017] 'process raft request'  (duration: 256.634796ms)","trace[220973017] 'compare'  (duration: 191.764441ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:57:33.655472Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:57:33.205604Z","time spent":"449.816138ms","remote":"127.0.0.1:55546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":844,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/metrics-server\" mod_revision:385 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/metrics-server\" value_size:781 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/metrics-server\" > >"}
	{"level":"info","ts":"2024-04-16T17:57:33.838289Z","caller":"traceutil/trace.go:171","msg":"trace[340670038] linearizableReadLoop","detail":"{readStateIndex:426; appliedIndex:424; }","duration":"177.868364ms","start":"2024-04-16T17:57:33.660407Z","end":"2024-04-16T17:57:33.838275Z","steps":["trace[340670038] 'read index received'  (duration: 116.822838ms)","trace[340670038] 'applied index is now lower than readState.Index'  (duration: 61.044992ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:57:33.838618Z","caller":"traceutil/trace.go:171","msg":"trace[868543303] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"607.677132ms","start":"2024-04-16T17:57:33.230885Z","end":"2024-04-16T17:57:33.838562Z","steps":["trace[868543303] 'process raft request'  (duration: 546.403005ms)","trace[868543303] 'compare'  (duration: 60.855512ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:57:33.838898Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:57:33.230864Z","time spent":"607.964934ms","remote":"127.0.0.1:55574","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4731,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-76f75df574-v6dwd\" mod_revision:344 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-76f75df574-v6dwd\" value_size:4672 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-76f75df574-v6dwd\" > >"}
	{"level":"info","ts":"2024-04-16T17:57:33.839059Z","caller":"traceutil/trace.go:171","msg":"trace[1778091132] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"535.420986ms","start":"2024-04-16T17:57:33.303626Z","end":"2024-04-16T17:57:33.839047Z","steps":["trace[1778091132] 'process raft request'  (duration: 534.604346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T17:57:33.839159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T17:57:33.303608Z","time spent":"535.512167ms","remote":"127.0.0.1:55658","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-vjahiivyujr42mxoz5nm4ho5kq\" mod_revision:276 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-vjahiivyujr42mxoz5nm4ho5kq\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-vjahiivyujr42mxoz5nm4ho5kq\" > >"}
	{"level":"warn","ts":"2024-04-16T17:57:33.838913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.487556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-304316\" ","response":"range_response_count:1 size:5764"}
	{"level":"info","ts":"2024-04-16T17:57:33.839429Z","caller":"traceutil/trace.go:171","msg":"trace[1274234003] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-304316; range_end:; response_count:1; response_revision:415; }","duration":"179.040346ms","start":"2024-04-16T17:57:33.660378Z","end":"2024-04-16T17:57:33.839418Z","steps":["trace[1274234003] 'agreement among raft nodes before linearized reading'  (duration: 178.458545ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:58:54.639509Z","caller":"traceutil/trace.go:171","msg":"trace[296041862] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"120.320916ms","start":"2024-04-16T17:58:54.519156Z","end":"2024-04-16T17:58:54.639477Z","steps":["trace[296041862] 'process raft request'  (duration: 60.017247ms)","trace[296041862] 'compare'  (duration: 60.129945ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T17:58:56.632314Z","caller":"traceutil/trace.go:171","msg":"trace[1425136071] linearizableReadLoop","detail":"{readStateIndex:541; appliedIndex:540; }","duration":"118.624768ms","start":"2024-04-16T17:58:56.513576Z","end":"2024-04-16T17:58:56.632201Z","steps":["trace[1425136071] 'read index received'  (duration: 56.605497ms)","trace[1425136071] 'applied index is now lower than readState.Index'  (duration: 62.017552ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-16T17:58:56.633481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.648991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-04-16T17:58:56.633814Z","caller":"traceutil/trace.go:171","msg":"trace[830659573] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:512; }","duration":"120.282613ms","start":"2024-04-16T17:58:56.513515Z","end":"2024-04-16T17:58:56.633798Z","steps":["trace[830659573] 'agreement among raft nodes before linearized reading'  (duration: 119.280568ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T17:58:56.634803Z","caller":"traceutil/trace.go:171","msg":"trace[1294242776] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"201.718679ms","start":"2024-04-16T17:58:56.43288Z","end":"2024-04-16T17:58:56.634598Z","steps":["trace[1294242776] 'process raft request'  (duration: 137.351715ms)","trace[1294242776] 'compare'  (duration: 60.926204ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-16T18:00:05.372446Z","caller":"traceutil/trace.go:171","msg":"trace[447696603] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"120.36715ms","start":"2024-04-16T18:00:05.252029Z","end":"2024-04-16T18:00:05.372396Z","steps":["trace[447696603] 'process raft request'  (duration: 120.235539ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T18:00:27.658075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.48509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-04-16T18:00:27.658356Z","caller":"traceutil/trace.go:171","msg":"trace[1345281850] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:590; }","duration":"130.999019ms","start":"2024-04-16T18:00:27.527329Z","end":"2024-04-16T18:00:27.658328Z","steps":["trace[1345281850] 'range keys from in-memory index tree'  (duration: 130.343262ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T18:00:27.877392Z","caller":"traceutil/trace.go:171","msg":"trace[576843287] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"213.325397ms","start":"2024-04-16T18:00:27.664045Z","end":"2024-04-16T18:00:27.87737Z","steps":["trace[576843287] 'process raft request'  (duration: 213.167781ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T18:07:10.807154Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":675}
	{"level":"info","ts":"2024-04-16T18:07:10.817141Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":675,"took":"9.487837ms","hash":2455372306,"current-db-size-bytes":2183168,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2183168,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-16T18:07:10.817308Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2455372306,"revision":675,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T18:12:10.814827Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":918}
	{"level":"info","ts":"2024-04-16T18:12:10.819427Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":918,"took":"4.23081ms","hash":3275180982,"current-db-size-bytes":2183168,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-04-16T18:12:10.819486Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3275180982,"revision":918,"compact-revision":675}
	
	
	==> kernel <==
	 18:13:11 up 21 min,  0 users,  load average: 0.16, 0.17, 0.14
	Linux default-k8s-diff-port-304316 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [21b906fbaf57981f8868216ceff13893a051b8fea822e5fd4d8d41260a7f56a8] <==
	W0416 17:56:58.666306       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.708012       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.709471       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.816057       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.883149       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.922122       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.955291       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:58.983025       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.048918       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.076194       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.111405       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.118593       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.275775       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.315572       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.358615       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.381956       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.601800       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.610153       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.615432       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.939986       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:56:59.970863       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:57:00.036246       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:57:00.695360       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:57:05.091184       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 17:57:05.314121       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6ac416d88f5c585352f7dad8e45a3e600624ecb4ae332e834d820138a746281b] <==
	I0416 18:07:13.746557       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:08:13.745773       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:08:13.745884       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 18:08:13.745898       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:08:13.746970       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:08:13.747111       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 18:08:13.747146       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:10:13.746835       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:10:13.747304       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 18:10:13.747350       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:10:13.747444       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:10:13.747717       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 18:10:13.748769       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:12:12.749338       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:12:12.749475       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 18:12:13.750421       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:12:13.750528       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 18:12:13.750556       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 18:12:13.750638       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 18:12:13.750777       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 18:12:13.752022       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fcd49b74b51a0d8ba18e356254a1246522cdc704b6bd708aabd9b8fb15817d32] <==
	I0416 18:07:28.671508       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:07:58.116270       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:07:58.680013       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:08:28.125275       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:08:28.687825       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 18:08:29.413339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="201.882µs"
	I0416 18:08:43.411610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="70.875µs"
	E0416 18:08:58.130375       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:08:58.695305       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:09:28.136616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:09:28.706064       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:09:58.142524       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:09:58.716016       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:10:28.148382       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:10:28.725584       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:10:58.154106       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:10:58.733800       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:11:28.160312       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:11:28.742640       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:11:58.165537       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:11:58.750639       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:12:28.171634       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:12:28.759080       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 18:12:58.177335       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 18:12:58.767153       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8bd8bce7ea2961ef24d7ad28d1e332286d7bab24cce38c3e9ef6672a935d4f65] <==
	I0416 17:57:31.177180       1 server_others.go:72] "Using iptables proxy"
	I0416 17:57:31.194356       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	I0416 17:57:31.298863       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 17:57:31.298890       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 17:57:31.298912       1 server_others.go:168] "Using iptables Proxier"
	I0416 17:57:31.304041       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 17:57:31.304852       1 server.go:865] "Version info" version="v1.29.3"
	I0416 17:57:31.304895       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 17:57:31.308851       1 config.go:188] "Starting service config controller"
	I0416 17:57:31.308906       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 17:57:31.308940       1 config.go:97] "Starting endpoint slice config controller"
	I0416 17:57:31.308971       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 17:57:31.324277       1 config.go:315] "Starting node config controller"
	I0416 17:57:31.324293       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 17:57:31.415836       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 17:57:31.415901       1 shared_informer.go:318] Caches are synced for service config
	I0416 17:57:31.424868       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4e37c3d94e0aba29ea7afcaabd1bb7999d2b0a093f26f76d37272b577dcf4292] <==
	W0416 17:57:13.658783       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:13.658856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:13.739486       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 17:57:13.739549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 17:57:13.807163       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 17:57:13.807187       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 17:57:13.816081       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 17:57:13.816215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 17:57:13.835048       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 17:57:13.835126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 17:57:13.881159       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:13.882723       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:13.942040       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:13.942280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:13.983933       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 17:57:13.984269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 17:57:13.984223       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 17:57:13.984371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 17:57:14.033834       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 17:57:14.033888       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 17:57:14.129069       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 17:57:14.129339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 17:57:14.136176       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 17:57:14.136275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0416 17:57:16.004796       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 18:10:16 default-k8s-diff-port-304316 kubelet[3959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:10:21 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:10:21.395090    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:10:36 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:10:36.396919    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:10:49 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:10:49.395260    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:11:03 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:11:03.395554    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:11:14 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:11:14.395179    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:11:16 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:11:16.465070    3959 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:11:16 default-k8s-diff-port-304316 kubelet[3959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:11:16 default-k8s-diff-port-304316 kubelet[3959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:11:16 default-k8s-diff-port-304316 kubelet[3959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:11:16 default-k8s-diff-port-304316 kubelet[3959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:11:25 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:11:25.395338    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:11:39 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:11:39.394863    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:11:51 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:11:51.395232    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:12:02 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:12:02.394632    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:12:14 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:12:14.395920    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:12:16 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:12:16.461225    3959 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 18:12:16 default-k8s-diff-port-304316 kubelet[3959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 18:12:16 default-k8s-diff-port-304316 kubelet[3959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 18:12:16 default-k8s-diff-port-304316 kubelet[3959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 18:12:16 default-k8s-diff-port-304316 kubelet[3959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 18:12:27 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:12:27.395085    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:12:41 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:12:41.394982    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:12:54 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:12:54.395411    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	Apr 16 18:13:05 default-k8s-diff-port-304316 kubelet[3959]: E0416 18:13:05.394900    3959 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qv9w5" podUID="07c1a75f-66de-4672-90ef-a5d837dc6632"
	
	
	==> storage-provisioner [3fcdda7db7fdff890f05c47386ac684fa9aa0bff7f18f708b3a1ea8dfdb63edf] <==
	I0416 17:57:31.608200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 17:57:31.633204       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 17:57:31.633483       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 17:57:31.683892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 17:57:31.684058       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-304316_e74b58d7-e061-46b4-bbc0-a983d5d046af!
	I0416 17:57:31.699046       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9622f49e-56fe-44d4-a543-bcc5bd14e470", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-304316_e74b58d7-e061-46b4-bbc0-a983d5d046af became leader
	I0416 17:57:31.804955       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-304316_e74b58d7-e061-46b4-bbc0-a983d5d046af!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-304316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qv9w5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-304316 describe pod metrics-server-57f55c9bc5-qv9w5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304316 describe pod metrics-server-57f55c9bc5-qv9w5: exit status 1 (62.153965ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qv9w5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-304316 describe pod metrics-server-57f55c9bc5-qv9w5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (393.79s)
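
For reference, the post-mortem above can be reproduced by hand against the same profile; a minimal shell sketch, assuming the default-k8s-diff-port-304316 profile is still up and kubectl/minikube are on PATH (the <pod-name> placeholder comes from the listing and will differ between runs). The NotFound from the describe step above is likely a namespace mismatch: the pod lives in kube-system while the describe was issued against the default namespace, so the sketch passes -n kube-system explicitly.

  # confirm the apiserver is reported as Running for the profile
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p default-k8s-diff-port-304316

  # list pods in any non-Running phase across all namespaces
  kubectl --context default-k8s-diff-port-304316 get po -A \
    --field-selector=status.phase!=Running \
    -o=jsonpath='{.items[*].metadata.name}'

  # inspect a specific non-running pod; <pod-name> is taken from the previous command
  kubectl --context default-k8s-diff-port-304316 -n kube-system describe pod <pod-name>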

                                                
                                    

Test pass (250/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.37
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.29.3/json-events 4.25
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.07
18 TestDownloadOnly/v1.29.3/DeleteAll 0.14
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.30.0-rc.2/json-events 4.04
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.55
31 TestOffline 98.88
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 138.62
38 TestAddons/parallel/Registry 14.49
40 TestAddons/parallel/InspektorGadget 11.15
41 TestAddons/parallel/MetricsServer 6.79
42 TestAddons/parallel/HelmTiller 11.92
44 TestAddons/parallel/CSI 53.78
45 TestAddons/parallel/Headlamp 13.22
46 TestAddons/parallel/CloudSpanner 5.62
47 TestAddons/parallel/LocalPath 51.95
48 TestAddons/parallel/NvidiaDevicePlugin 5.69
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestCertOptions 63.05
55 TestCertExpiration 298.9
57 TestForceSystemdFlag 49.07
58 TestForceSystemdEnv 78.93
60 TestKVMDriverInstallOrUpdate 1.19
64 TestErrorSpam/setup 44.07
65 TestErrorSpam/start 0.37
66 TestErrorSpam/status 0.77
67 TestErrorSpam/pause 1.7
68 TestErrorSpam/unpause 1.73
69 TestErrorSpam/stop 5.66
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 96.95
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 40.28
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
81 TestFunctional/serial/CacheCmd/cache/add_local 1.13
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 35.88
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.61
92 TestFunctional/serial/LogsFileCmd 1.57
93 TestFunctional/serial/InvalidService 4.55
95 TestFunctional/parallel/ConfigCmd 0.39
96 TestFunctional/parallel/DashboardCmd 13.62
97 TestFunctional/parallel/DryRun 0.32
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 0.92
103 TestFunctional/parallel/ServiceCmdConnect 13.69
104 TestFunctional/parallel/AddonsCmd 0.22
105 TestFunctional/parallel/PersistentVolumeClaim 42.22
107 TestFunctional/parallel/SSHCmd 0.47
108 TestFunctional/parallel/CpCmd 1.44
109 TestFunctional/parallel/MySQL 29.54
110 TestFunctional/parallel/FileSync 0.24
111 TestFunctional/parallel/CertSync 1.44
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
119 TestFunctional/parallel/License 0.18
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
124 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
125 TestFunctional/parallel/ImageCommands/Setup 1.02
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.68
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.49
141 TestFunctional/parallel/ServiceCmd/DeployApp 14.29
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.05
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.7
145 TestFunctional/parallel/ServiceCmd/List 0.52
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
148 TestFunctional/parallel/ServiceCmd/Format 0.34
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.37
150 TestFunctional/parallel/ServiceCmd/URL 0.45
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
152 TestFunctional/parallel/MountCmd/any-port 6.86
153 TestFunctional/parallel/ProfileCmd/profile_list 0.36
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
155 TestFunctional/parallel/Version/short 0.06
156 TestFunctional/parallel/Version/components 0.82
157 TestFunctional/parallel/MountCmd/specific-port 1.88
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 203.3
166 TestMultiControlPlane/serial/DeployApp 5.23
167 TestMultiControlPlane/serial/PingHostFromPods 1.38
168 TestMultiControlPlane/serial/AddWorkerNode 47.31
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
171 TestMultiControlPlane/serial/CopyFile 13.69
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.7
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
180 TestMultiControlPlane/serial/RestartCluster 377.06
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.42
182 TestMultiControlPlane/serial/AddSecondaryNode 73.9
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.58
187 TestJSONOutput/start/Command 60.3
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.74
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.67
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.44
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.21
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 95.32
219 TestMountStart/serial/StartWithMountFirst 28.01
220 TestMountStart/serial/VerifyMountFirst 0.4
221 TestMountStart/serial/StartWithMountSecond 27.56
222 TestMountStart/serial/VerifyMountSecond 0.4
223 TestMountStart/serial/DeleteFirst 0.87
224 TestMountStart/serial/VerifyMountPostDelete 0.4
225 TestMountStart/serial/Stop 1.41
226 TestMountStart/serial/RestartStopped 23.14
227 TestMountStart/serial/VerifyMountPostStop 0.39
230 TestMultiNode/serial/FreshStart2Nodes 103.14
231 TestMultiNode/serial/DeployApp2Nodes 4.28
232 TestMultiNode/serial/PingHostFrom2Pods 0.89
233 TestMultiNode/serial/AddNode 41.49
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.66
237 TestMultiNode/serial/StopNode 3.18
238 TestMultiNode/serial/StartAfterStop 28.65
240 TestMultiNode/serial/DeleteNode 2.41
242 TestMultiNode/serial/RestartMultiNode 165.07
243 TestMultiNode/serial/ValidateNameConflict 49.17
250 TestScheduledStopUnix 118.16
254 TestRunningBinaryUpgrade 226.94
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
269 TestNoKubernetes/serial/StartWithK8s 99.45
273 TestNoKubernetes/serial/StartWithStopK8s 9.09
278 TestNetworkPlugins/group/false 3.29
282 TestNoKubernetes/serial/Start 30.76
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
284 TestNoKubernetes/serial/ProfileList 1.17
285 TestNoKubernetes/serial/Stop 1.41
286 TestNoKubernetes/serial/StartNoArgs 48.34
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
289 TestStartStop/group/no-preload/serial/FirstStart 113.38
291 TestStartStop/group/embed-certs/serial/FirstStart 63.96
294 TestStartStop/group/no-preload/serial/DeployApp 9.31
295 TestStartStop/group/embed-certs/serial/DeployApp 8.32
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
300 TestStartStop/group/old-k8s-version/serial/Stop 5.29
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/no-preload/serial/SecondStart 629.06
306 TestStartStop/group/embed-certs/serial/SecondStart 621.78
308 TestStoppedBinaryUpgrade/Setup 0.54
309 TestStoppedBinaryUpgrade/Upgrade 101.64
310 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
312 TestPause/serial/Start 98.32
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.78
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.29
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 639.52
327 TestStartStop/group/newest-cni/serial/FirstStart 61.36
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
330 TestStartStop/group/newest-cni/serial/Stop 7.35
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
332 TestStartStop/group/newest-cni/serial/SecondStart 43.09
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
336 TestStartStop/group/newest-cni/serial/Pause 2.76
337 TestNetworkPlugins/group/auto/Start 61.22
338 TestNetworkPlugins/group/auto/KubeletFlags 0.22
339 TestNetworkPlugins/group/auto/NetCatPod 10.22
340 TestNetworkPlugins/group/auto/DNS 0.17
341 TestNetworkPlugins/group/auto/Localhost 0.14
342 TestNetworkPlugins/group/auto/HairPin 0.15
343 TestNetworkPlugins/group/flannel/Start 82.13
344 TestNetworkPlugins/group/enable-default-cni/Start 65.22
346 TestNetworkPlugins/group/flannel/ControllerPod 6.01
347 TestNetworkPlugins/group/bridge/Start 61.2
348 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
349 TestNetworkPlugins/group/flannel/NetCatPod 12.26
350 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
351 TestNetworkPlugins/group/flannel/DNS 0.21
352 TestNetworkPlugins/group/flannel/Localhost 0.19
353 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.32
354 TestNetworkPlugins/group/flannel/HairPin 0.17
355 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
356 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
357 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
358 TestNetworkPlugins/group/calico/Start 92.92
359 TestNetworkPlugins/group/kindnet/Start 83.59
360 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
361 TestNetworkPlugins/group/bridge/NetCatPod 13.26
362 TestNetworkPlugins/group/bridge/DNS 26.04
363 TestNetworkPlugins/group/bridge/Localhost 0.2
364 TestNetworkPlugins/group/bridge/HairPin 0.17
365 TestNetworkPlugins/group/custom-flannel/Start 91.07
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.24
369 TestNetworkPlugins/group/calico/NetCatPod 13.26
370 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
371 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
372 TestNetworkPlugins/group/kindnet/DNS 0.16
373 TestNetworkPlugins/group/kindnet/Localhost 0.14
374 TestNetworkPlugins/group/kindnet/HairPin 0.13
375 TestNetworkPlugins/group/calico/DNS 0.18
376 TestNetworkPlugins/group/calico/Localhost 0.15
377 TestNetworkPlugins/group/calico/HairPin 0.15
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.21
380 TestNetworkPlugins/group/custom-flannel/DNS 0.16
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
TestDownloadOnly/v1.20.0/json-events (10.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-080115 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-080115 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.371852463s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.37s)
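For local reproduction, the same download-only invocation can be run by hand; a minimal sketch using the flags from the log above (the repeated --container-runtime flag is collapsed to one, and the profile name is arbitrary):

    # Pre-fetch the v1.20.0 ISO, preload tarball and kubectl without creating a VM
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-080115 \
      --force --alsologtostderr --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio
    # Remove the throwaway profile when done
    out/minikube-linux-amd64 delete -p download-only-080115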

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-080115
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-080115: exit status 85 (73.850306ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-080115 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |          |
	|         | -p download-only-080115        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:24.699073   10922 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:24.699187   10922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:24.699192   10922 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:24.699196   10922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:24.699403   10922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	W0416 16:19:24.699523   10922 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18649-3628/.minikube/config/config.json: open /home/jenkins/minikube-integration/18649-3628/.minikube/config/config.json: no such file or directory
	I0416 16:19:24.700031   10922 out.go:298] Setting JSON to true
	I0416 16:19:24.700830   10922 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":117,"bootTime":1713284248,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:24.700927   10922 start.go:139] virtualization: kvm guest
	I0416 16:19:24.703566   10922 out.go:97] [download-only-080115] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:19:24.705224   10922 out.go:169] MINIKUBE_LOCATION=18649
	W0416 16:19:24.703681   10922 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball: no such file or directory
	I0416 16:19:24.703733   10922 notify.go:220] Checking for updates...
	I0416 16:19:24.708335   10922 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:24.709764   10922 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:19:24.711102   10922 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:19:24.712404   10922 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0416 16:19:24.715085   10922 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0416 16:19:24.715315   10922 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:19:24.814966   10922 out.go:97] Using the kvm2 driver based on user configuration
	I0416 16:19:24.814993   10922 start.go:297] selected driver: kvm2
	I0416 16:19:24.814998   10922 start.go:901] validating driver "kvm2" against <nil>
	I0416 16:19:24.815350   10922 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:24.815480   10922 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18649-3628/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 16:19:24.829437   10922 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 16:19:24.829487   10922 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 16:19:24.829971   10922 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0416 16:19:24.830130   10922 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0416 16:19:24.830188   10922 cni.go:84] Creating CNI manager for ""
	I0416 16:19:24.830201   10922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 16:19:24.830210   10922 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 16:19:24.830249   10922 start.go:340] cluster config:
	{Name:download-only-080115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-080115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:19:24.830405   10922 iso.go:125] acquiring lock: {Name:mk3888719222dce6df586b814751e2a4f8b5072d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 16:19:24.832069   10922 out.go:97] Downloading VM boot image ...
	I0416 16:19:24.832098   10922 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18649-3628/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0416 16:19:28.727016   10922 out.go:97] Starting "download-only-080115" primary control-plane node in "download-only-080115" cluster
	I0416 16:19:28.727057   10922 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 16:19:28.746848   10922 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 16:19:28.746879   10922 cache.go:56] Caching tarball of preloaded images
	I0416 16:19:28.747032   10922 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 16:19:28.748642   10922 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0416 16:19:28.748671   10922 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0416 16:19:28.778582   10922 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 16:19:33.607965   10922 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0416 16:19:33.608056   10922 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18649-3628/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0416 16:19:34.503557   10922 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0416 16:19:34.503894   10922 profile.go:143] Saving config to /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/download-only-080115/config.json ...
	I0416 16:19:34.503922   10922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/download-only-080115/config.json: {Name:mkbb4545a8f5135deccd2b212c75c5c576f4c702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 16:19:34.504077   10922 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 16:19:34.504247   10922 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18649-3628/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-080115 host does not exist
	  To start a cluster, run: "minikube start -p download-only-080115"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
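The non-zero exit above is what the test expects: a download-only profile never creates a host, so "minikube logs" has nothing to collect. A quick sketch of the same check, assuming the profile has not yet been deleted:

    out/minikube-linux-amd64 logs -p download-only-080115
    echo $?   # 85 in this run: the control-plane host does not exist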

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-080115
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (4.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-794654 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-794654 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.248526787s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (4.25s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-794654
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-794654: exit status 85 (67.7963ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-080115 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-080115        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-080115        | download-only-080115 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | -o=json --download-only        | download-only-794654 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-794654        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:35.390652   11107 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:35.390761   11107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:35.390774   11107 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:35.390779   11107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:35.390975   11107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:19:35.391551   11107 out.go:298] Setting JSON to true
	I0416 16:19:35.392380   11107 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":127,"bootTime":1713284248,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:35.392434   11107 start.go:139] virtualization: kvm guest
	I0416 16:19:35.394641   11107 out.go:97] [download-only-794654] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:19:35.396210   11107 out.go:169] MINIKUBE_LOCATION=18649
	I0416 16:19:35.394778   11107 notify.go:220] Checking for updates...
	I0416 16:19:35.399227   11107 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:35.400735   11107 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:19:35.402154   11107 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:19:35.403564   11107 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-794654 host does not exist
	  To start a cluster, run: "minikube start -p download-only-794654"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-794654
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/json-events (4.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-348353 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-348353 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.036755384s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (4.04s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-348353
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-348353: exit status 85 (69.001229ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-080115 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-080115           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-080115           | download-only-080115 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | -o=json --download-only           | download-only-794654 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-794654           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| delete  | -p download-only-794654           | download-only-794654 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC | 16 Apr 24 16:19 UTC |
	| start   | -o=json --download-only           | download-only-348353 | jenkins | v1.33.0-beta.0 | 16 Apr 24 16:19 UTC |                     |
	|         | -p download-only-348353           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 16:19:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 16:19:39.963143   11259 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:19:39.963279   11259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:39.963289   11259 out.go:304] Setting ErrFile to fd 2...
	I0416 16:19:39.963296   11259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:19:39.963482   11259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:19:39.964039   11259 out.go:298] Setting JSON to true
	I0416 16:19:39.964814   11259 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":132,"bootTime":1713284248,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:19:39.964898   11259 start.go:139] virtualization: kvm guest
	I0416 16:19:39.966892   11259 out.go:97] [download-only-348353] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:19:39.968260   11259 out.go:169] MINIKUBE_LOCATION=18649
	I0416 16:19:39.967062   11259 notify.go:220] Checking for updates...
	I0416 16:19:39.970864   11259 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:19:39.972439   11259 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:19:39.973774   11259 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:19:39.975064   11259 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-348353 host does not exist
	  To start a cluster, run: "minikube start -p download-only-348353"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-348353
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-249934 --alsologtostderr --binary-mirror http://127.0.0.1:41139 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-249934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-249934
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (98.88s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-476002 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-476002 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.833181859s)
helpers_test.go:175: Cleaning up "offline-crio-476002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-476002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-476002: (1.044926974s)
--- PASS: TestOffline (98.88s)
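For reference, the start/cleanup pair this test drives can be issued directly; a sketch with the exact flags from the log (profile name copied verbatim):

    out/minikube-linux-amd64 start -p offline-crio-476002 --alsologtostderr -v=1 \
      --memory=2048 --wait=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p offline-crio-476002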

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-320546
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-320546: exit status 85 (61.11705ms)

                                                
                                                
-- stdout --
	* Profile "addons-320546" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-320546"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-320546
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-320546: exit status 85 (60.930284ms)

                                                
                                                
-- stdout --
	* Profile "addons-320546" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-320546"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (138.62s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-320546 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-320546 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m18.615249665s)
--- PASS: TestAddons/Setup (138.62s)
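The same addon set can also be enabled piecemeal on an existing profile; a sketch covering a subset of the addons passed above (addon names as used by minikube):

    out/minikube-linux-amd64 -p addons-320546 addons enable registry
    out/minikube-linux-amd64 -p addons-320546 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-320546 addons enable ingress
    # Confirm which addons ended up enabled
    out/minikube-linux-amd64 addons list -p addons-320546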

                                                
                                    
TestAddons/parallel/Registry (14.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 27.066029ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rl7f5" [1c0770e4-b4b2-4e20-b112-f4222e84b5a3] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009562829s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xkgds" [232b8056-a4f1-4480-be41-acd884e1691e] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008297904s
addons_test.go:340: (dbg) Run:  kubectl --context addons-320546 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-320546 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-320546 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.216929963s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-320546 addons disable registry --alsologtostderr -v=1: (1.059831698s)
--- PASS: TestAddons/parallel/Registry (14.49s)
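The registry check boils down to an in-cluster HTTP probe plus a node IP lookup; a sketch of the two commands the test issues:

    # Probe the registry Service from a throwaway busybox pod
    kubectl --context addons-320546 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Node IP used for the host-side registry-proxy check
    out/minikube-linux-amd64 -p addons-320546 ip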

                                                
                                    
TestAddons/parallel/InspektorGadget (11.15s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6g49q" [e461fb35-0b3d-4de3-9fa9-1b0efe2a7532] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00487045s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-320546
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-320546: (6.144275506s)
--- PASS: TestAddons/parallel/InspektorGadget (11.15s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.79s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.156747ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-9ncdk" [ba7c9057-48bc-4693-ada5-ae248b38140a] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005265399s
addons_test.go:415: (dbg) Run:  kubectl --context addons-320546 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.79s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.532528ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-82r9t" [94a681d5-055b-449c-941b-808c08e30de3] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005445423s
addons_test.go:473: (dbg) Run:  kubectl --context addons-320546 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-320546 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.993536781s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.92s)
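The Tiller check is a single ephemeral helm v2 client run against the in-cluster tiller-deploy; a sketch of the same command:

    kubectl --context addons-320546 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version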

                                                
                                    
TestAddons/parallel/CSI (53.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 27.705352ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-320546 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-320546 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9b591a67-8305-498e-ae68-7049f2c3bd99] Pending
2024/04/16 16:22:17 [DEBUG] GET http://192.168.39.101:5000
helpers_test.go:344: "task-pv-pod" [9b591a67-8305-498e-ae68-7049f2c3bd99] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9b591a67-8305-498e-ae68-7049f2c3bd99] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004038562s
addons_test.go:584: (dbg) Run:  kubectl --context addons-320546 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-320546 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-320546 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-320546 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-320546 delete pod task-pv-pod: (1.252166566s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-320546 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-320546 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-320546 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4c11f793-118f-4815-94fc-a9528978ab6c] Pending
helpers_test.go:344: "task-pv-pod-restore" [4c11f793-118f-4815-94fc-a9528978ab6c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4c11f793-118f-4815-94fc-a9528978ab6c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.005365054s
addons_test.go:626: (dbg) Run:  kubectl --context addons-320546 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-320546 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-320546 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-320546 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80255353s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.78s)
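The CSI flow above is driven by plain kubectl against the bundled manifests; a sketch of the create/poll steps (testdata paths are relative to the minikube test tree):

    kubectl --context addons-320546 create -f testdata/csi-hostpath-driver/pvc.yaml
    # Poll the claim phase until it reports Bound
    kubectl --context addons-320546 get pvc hpvc -o jsonpath={.status.phase} -n default
    kubectl --context addons-320546 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Snapshot readiness is read the same way
    kubectl --context addons-320546 get volumesnapshot new-snapshot-demo \
      -o jsonpath={.status.readyToUse} -n default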

                                                
                                    
TestAddons/parallel/Headlamp (13.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-320546 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-320546 --alsologtostderr -v=1: (1.206873815s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-4jpnf" [1a3eee24-c904-4664-822f-114064b24f70] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-4jpnf" [1a3eee24-c904-4664-822f-114064b24f70] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-4jpnf" [1a3eee24-c904-4664-822f-114064b24f70] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.010260321s
--- PASS: TestAddons/parallel/Headlamp (13.22s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-5f4v6" [69b9c18c-4784-40b4-b02f-4060b6e48779] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007771004s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-320546
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (51.95s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-320546 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-320546 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-320546 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ece2ffd6-4ebc-4306-b2a3-7b698e76e714] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ece2ffd6-4ebc-4306-b2a3-7b698e76e714] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ece2ffd6-4ebc-4306-b2a3-7b698e76e714] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004769065s
addons_test.go:891: (dbg) Run:  kubectl --context addons-320546 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 ssh "cat /opt/local-path-provisioner/pvc-af6913b5-62da-4d3c-913d-34caa313684f_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-320546 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-320546 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-320546 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-320546 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.082640213s)
--- PASS: TestAddons/parallel/LocalPath (51.95s)
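The local-path check can be repeated by hand; a sketch (the pvc-... directory name below is specific to this run and will differ elsewhere):

    kubectl --context addons-320546 get pvc test-pvc -o=json
    out/minikube-linux-amd64 -p addons-320546 ssh \
      "cat /opt/local-path-provisioner/pvc-af6913b5-62da-4d3c-913d-34caa313684f_default_test-pvc/file1"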

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-h7wqn" [7c8cc092-7db5-49a3-88fa-480f2ecee1b3] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005030908s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-320546
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.69s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-w94qg" [208a058d-7e12-4509-b543-5e14c69bcf86] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00450881s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-320546 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-320546 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestCertOptions (63.05s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-303502 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-303502 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m1.601152658s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-303502 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-303502 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-303502 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-303502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-303502
--- PASS: TestCertOptions (63.05s)
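The same assertions can be made by hand with the flags exercised above; a rough sketch (the grep patterns are assumptions, not part of the test):

minikube start -p cert-options-303502 --memory=2048 \
  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
  --apiserver-names=localhost --apiserver-names=www.google.com \
  --apiserver-port=8555 --driver=kvm2 --container-runtime=crio

# The extra IPs and names should appear as SANs on the apiserver certificate ...
minikube -p cert-options-303502 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"

# ... and the kubeconfig should point at the non-default port 8555.
kubectl --context cert-options-303502 config view | grep 8555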

                                                
                                    
TestCertExpiration (298.9s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-235607 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-235607 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m8.44904716s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-235607 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-235607 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (49.471806604s)
helpers_test.go:175: Cleaning up "cert-expiration-235607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-235607
--- PASS: TestCertExpiration (298.90s)
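Roughly what this test exercises, as manual steps (the sleep and the openssl check are assumptions standing in for the test's internal wait and assertions):

# Start with deliberately short-lived certificates, let them expire, then restart with a long
# expiry; the second start should regenerate the certificates instead of failing.
minikube start -p cert-expiration-235607 --memory=2048 --cert-expiration=3m \
  --driver=kvm2 --container-runtime=crio
sleep 200
minikube start -p cert-expiration-235607 --memory=2048 --cert-expiration=8760h \
  --driver=kvm2 --container-runtime=crio

# Confirm the new expiry date on the apiserver certificate.
minikube -p cert-expiration-235607 ssh \
  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"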

                                                
                                    
TestForceSystemdFlag (49.07s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-611006 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-611006 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.878639674s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-611006 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
E0416 17:26:53.077732   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "force-systemd-flag-611006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-611006
--- PASS: TestForceSystemdFlag (49.07s)
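What the test inspects, as standalone commands (the grep is an assumption about the assertion; the config path comes from the log above):

minikube start -p force-systemd-flag-611006 --memory=2048 --force-systemd \
  --driver=kvm2 --container-runtime=crio

# With --force-systemd, CRI-O is expected to be configured for the systemd cgroup manager.
minikube -p force-systemd-flag-611006 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager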

                                                
                                    
TestForceSystemdEnv (78.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-625347 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-625347 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.10566872s)
helpers_test.go:175: Cleaning up "force-systemd-env-625347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-625347
--- PASS: TestForceSystemdEnv (78.93s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.19s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.19s)

                                                
                                    
TestErrorSpam/setup (44.07s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-417107 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-417107 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-417107 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-417107 --driver=kvm2  --container-runtime=crio: (44.066587108s)
--- PASS: TestErrorSpam/setup (44.07s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
TestErrorSpam/pause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 pause
--- PASS: TestErrorSpam/pause (1.70s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (5.66s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 stop: (2.295430884s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 stop: (1.959557857s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-417107 --log_dir /tmp/nospam-417107 stop: (1.405925838s)
--- PASS: TestErrorSpam/stop (5.66s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18649-3628/.minikube/files/etc/test/nested/copy/10910/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (96.95s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-711095 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-711095 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m36.946603627s)
--- PASS: TestFunctional/serial/StartWithProxy (96.95s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.28s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-711095 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-711095 --alsologtostderr -v=8: (40.281375191s)
functional_test.go:659: soft start took 40.282033758s for "functional-711095" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.28s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-711095 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 cache add registry.k8s.io/pause:3.1: (1.097263726s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 cache add registry.k8s.io/pause:3.3: (1.181663101s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 cache add registry.k8s.io/pause:latest: (1.084661949s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-711095 /tmp/TestFunctionalserialCacheCmdcacheadd_local2641678837/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cache add minikube-local-cache-test:functional-711095
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cache delete minikube-local-cache-test:functional-711095
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-711095
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (232.280237ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
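The cache workflow covered by the subtests above, condensed into standalone commands (all taken from the log; the || echo just makes the expected failure visible):

# Add an image to minikube's on-host cache and load it into the node.
minikube -p functional-711095 cache add registry.k8s.io/pause:latest

# Remove it from the container runtime, confirm it is gone, then restore it from the cache.
minikube -p functional-711095 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-711095 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image removed (exit $?)"
minikube -p functional-711095 cache reload
minikube -p functional-711095 ssh sudo crictl inspecti registry.k8s.io/pause:latest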

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 kubectl -- --context functional-711095 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-711095 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.88s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-711095 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-711095 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.876747134s)
functional_test.go:757: restart took 35.876868755s for "functional-711095" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.88s)
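The restart passes a component flag straight through via --extra-config. Verifying that the flag actually landed is not part of this test; a hedged way to check it by hand would be:

minikube start -p functional-711095 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

# Assumed check (not performed by the test): the flag should appear on the kube-apiserver static pod.
kubectl --context functional-711095 -n kube-system get pods -l component=kube-apiserver -o yaml \
  | grep enable-admission-plugins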

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-711095 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 logs: (1.60659014s)
--- PASS: TestFunctional/serial/LogsCmd (1.61s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 logs --file /tmp/TestFunctionalserialLogsFileCmd3004062862/001/logs.txt
E0416 16:32:03.889477   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:32:03.895513   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:32:03.905764   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:32:03.926007   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:32:03.966291   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:32:04.046604   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:32:04.207034   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:32:04.527665   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 logs --file /tmp/TestFunctionalserialLogsFileCmd3004062862/001/logs.txt: (1.566580362s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                    
TestFunctional/serial/InvalidService (4.55s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-711095 apply -f testdata/invalidsvc.yaml
E0416 16:32:05.168634   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:32:06.448986   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-711095
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-711095: exit status 115 (309.839121ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.157:31699 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-711095 delete -f testdata/invalidsvc.yaml
E0416 16:32:09.009344   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
functional_test.go:2323: (dbg) Done: kubectl --context functional-711095 delete -f testdata/invalidsvc.yaml: (1.054402552s)
--- PASS: TestFunctional/serial/InvalidService (4.55s)
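The contents of testdata/invalidsvc.yaml are not shown in this log; a hypothetical manifest that produces the same SVC_UNREACHABLE failure is a NodePort service whose selector matches no pod:

kubectl --context functional-711095 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist   # no pod carries this label, so the service never gets endpoints
  ports:
  - port: 80
EOF

# With no backing pod, `minikube service` should fail (exit status 115 in the run above).
minikube -p functional-711095 service invalid-svc || echo "unreachable as expected (exit $?)"
kubectl --context functional-711095 delete svc invalid-svc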

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 config get cpus: exit status 14 (65.968393ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 config get cpus: exit status 14 (64.956122ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
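The set/get/unset round trip being checked, as plain commands; exit code 14 is what the run above reports when the key is absent:

minikube -p functional-711095 config get cpus || echo "unset (exit $?)"   # expect exit 14
minikube -p functional-711095 config set cpus 2
minikube -p functional-711095 config get cpus                             # prints 2
minikube -p functional-711095 config unset cpus
minikube -p functional-711095 config get cpus || echo "unset (exit $?)"   # expect exit 14 again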

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-711095 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-711095 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20207: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.62s)

                                                
                                    
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-711095 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-711095 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (165.08131ms)

                                                
                                                
-- stdout --
	* [functional-711095] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:32:42.593720   19786 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:32:42.593902   19786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:32:42.593924   19786 out.go:304] Setting ErrFile to fd 2...
	I0416 16:32:42.593939   19786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:32:42.594635   19786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:32:42.595421   19786 out.go:298] Setting JSON to false
	I0416 16:32:42.596603   19786 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":915,"bootTime":1713284248,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:32:42.596704   19786 start.go:139] virtualization: kvm guest
	I0416 16:32:42.598968   19786 out.go:177] * [functional-711095] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 16:32:42.600788   19786 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:32:42.600790   19786 notify.go:220] Checking for updates...
	I0416 16:32:42.602396   19786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:32:42.603910   19786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:32:42.605132   19786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:32:42.606539   19786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:32:42.607846   19786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:32:42.609447   19786 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:32:42.609916   19786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:32:42.609977   19786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:32:42.628330   19786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0416 16:32:42.628774   19786 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:32:42.629277   19786 main.go:141] libmachine: Using API Version  1
	I0416 16:32:42.629302   19786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:32:42.629629   19786 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:32:42.629810   19786 main.go:141] libmachine: (functional-711095) Calling .DriverName
	I0416 16:32:42.630043   19786 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:32:42.630297   19786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:32:42.630340   19786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:32:42.646284   19786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0416 16:32:42.646678   19786 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:32:42.647151   19786 main.go:141] libmachine: Using API Version  1
	I0416 16:32:42.647171   19786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:32:42.647513   19786 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:32:42.647714   19786 main.go:141] libmachine: (functional-711095) Calling .DriverName
	I0416 16:32:42.683085   19786 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 16:32:42.684588   19786 start.go:297] selected driver: kvm2
	I0416 16:32:42.684600   19786 start.go:901] validating driver "kvm2" against &{Name:functional-711095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-711095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:32:42.684736   19786 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:32:42.687378   19786 out.go:177] 
	W0416 16:32:42.688815   19786 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0416 16:32:42.690254   19786 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-711095 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
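The first invocation is meant to fail: 250MB is below minikube's usable minimum, so the dry run exits with RSRC_INSUFFICIENT_REQ_MEMORY (status 23 above) without touching the running cluster. Reproduced roughly:

minikube start -p functional-711095 --dry-run --memory 250MB \
  --driver=kvm2 --container-runtime=crio || echo "rejected as expected (exit $?)"

# Without the undersized memory request, the dry run should validate cleanly.
minikube start -p functional-711095 --dry-run --alsologtostderr -v=1 \
  --driver=kvm2 --container-runtime=crio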

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-711095 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-711095 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (165.055272ms)

                                                
                                                
-- stdout --
	* [functional-711095] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 16:32:42.421881   19736 out.go:291] Setting OutFile to fd 1 ...
	I0416 16:32:42.422017   19736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:32:42.422027   19736 out.go:304] Setting ErrFile to fd 2...
	I0416 16:32:42.422034   19736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 16:32:42.422319   19736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 16:32:42.422837   19736 out.go:298] Setting JSON to false
	I0416 16:32:42.423700   19736 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":914,"bootTime":1713284248,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 16:32:42.423762   19736 start.go:139] virtualization: kvm guest
	I0416 16:32:42.426217   19736 out.go:177] * [functional-711095] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0416 16:32:42.427679   19736 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 16:32:42.429196   19736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 16:32:42.427677   19736 notify.go:220] Checking for updates...
	I0416 16:32:42.430823   19736 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 16:32:42.432162   19736 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 16:32:42.433521   19736 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 16:32:42.435138   19736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 16:32:42.436960   19736 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 16:32:42.437404   19736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:32:42.437453   19736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:32:42.452251   19736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0416 16:32:42.452708   19736 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:32:42.453301   19736 main.go:141] libmachine: Using API Version  1
	I0416 16:32:42.453326   19736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:32:42.453631   19736 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:32:42.453828   19736 main.go:141] libmachine: (functional-711095) Calling .DriverName
	I0416 16:32:42.454063   19736 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 16:32:42.454385   19736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 16:32:42.454419   19736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 16:32:42.472591   19736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0416 16:32:42.473074   19736 main.go:141] libmachine: () Calling .GetVersion
	I0416 16:32:42.473541   19736 main.go:141] libmachine: Using API Version  1
	I0416 16:32:42.473572   19736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 16:32:42.473985   19736 main.go:141] libmachine: () Calling .GetMachineName
	I0416 16:32:42.474185   19736 main.go:141] libmachine: (functional-711095) Calling .DriverName
	I0416 16:32:42.517498   19736 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0416 16:32:42.518935   19736 start.go:297] selected driver: kvm2
	I0416 16:32:42.518952   19736 start.go:901] validating driver "kvm2" against &{Name:functional-711095 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-711095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 16:32:42.519088   19736 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 16:32:42.521357   19736 out.go:177] 
	W0416 16:32:42.522730   19736 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0416 16:32:42.524095   19736 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (13.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-711095 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-711095 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-rwhjx" [9b7e828e-f90a-4f13-a1cd-1c1070822702] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-rwhjx" [9b7e828e-f90a-4f13-a1cd-1c1070822702] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.007467616s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.157:30965
functional_test.go:1671: http://192.168.39.157:30965: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-rwhjx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.157:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.157:30965
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.69s)
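The deploy/expose/probe loop above, sketched as manual commands (the kubectl wait and curl stand in for the polling and HTTP check the test helper performs):

kubectl --context functional-711095 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-711095 expose deployment hello-node-connect \
  --type=NodePort --port=8080
kubectl --context functional-711095 wait --for=condition=available \
  deployment/hello-node-connect --timeout=120s

# Resolve the NodePort URL and hit it; the echoserver response includes the pod hostname.
URL=$(minikube -p functional-711095 service hello-node-connect --url)
curl -s "$URL" | head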

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [04425e5f-be59-4374-aba6-032958e3e674] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004173629s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-711095 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-711095 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-711095 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-711095 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-711095 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d747ab41-2ba2-4a9e-a95c-73aeb3650be6] Pending
helpers_test.go:344: "sp-pod" [d747ab41-2ba2-4a9e-a95c-73aeb3650be6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d747ab41-2ba2-4a9e-a95c-73aeb3650be6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004395368s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-711095 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-711095 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-711095 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [06bd1776-3587-4f52-a098-b3a21296b8df] Pending
helpers_test.go:344: "sp-pod" [06bd1776-3587-4f52-a098-b3a21296b8df] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [06bd1776-3587-4f52-a098-b3a21296b8df] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004853535s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-711095 exec sp-pod -- ls /tmp/mount
2024/04/16 16:32:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.22s)
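For context: the manifests this test applies (testdata/storage-provisioner/pvc.yaml and pod.yaml) are not reproduced in the log above. The sketch below is an assumed minimal equivalent built only from the names that do appear (PVC myclaim, pod sp-pod with label test=storage-provisioner, container myfrontend, mount path /tmp/mount); the image and requested size are placeholders, not the actual testdata contents.

# assumed sketch of the PVC and pod exercised above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi              # placeholder size
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: docker.io/library/nginx:latest   # placeholder image
      volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

Because the claim (and the volume provisioned for it) outlives the pod, the /tmp/mount/foo file written by the first sp-pod is still visible after the pod is deleted and re-created, which is what the final "ls /tmp/mount" step verifies.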

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh -n functional-711095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cp functional-711095:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2894683874/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh -n functional-711095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh -n functional-711095 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)

TestFunctional/parallel/MySQL (29.54s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-711095 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-fqqvk" [55ba91c0-6d6e-4d71-96f3-753c68339096] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0416 16:32:14.129823   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
helpers_test.go:344: "mysql-859648c796-fqqvk" [55ba91c0-6d6e-4d71-96f3-753c68339096] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004917115s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-711095 exec mysql-859648c796-fqqvk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-711095 exec mysql-859648c796-fqqvk -- mysql -ppassword -e "show databases;": exit status 1 (194.89688ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-711095 exec mysql-859648c796-fqqvk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-711095 exec mysql-859648c796-fqqvk -- mysql -ppassword -e "show databases;": exit status 1 (199.271866ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-711095 exec mysql-859648c796-fqqvk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.54s)
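The two non-zero exits above are expected: mysqld was still initializing its socket when the first queries ran, and the test retries until "show databases" succeeds. The deployment itself comes from testdata/mysql.yaml, which is not reproduced in this log; the sketch below is an assumed minimal equivalent using only what the log shows (label app=mysql, a container named mysql, the docker.io/library/mysql:5.7 image that appears in the image list later in this report, and a root password inferred from the "mysql -ppassword" calls).

# assumed sketch of testdata/mysql.yaml (not shown in this log)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: docker.io/library/mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password           # inferred from the -ppassword flag above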

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10910/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo cat /etc/test/nested/copy/10910/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10910.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo cat /etc/ssl/certs/10910.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10910.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo cat /usr/share/ca-certificates/10910.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/109102.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo cat /etc/ssl/certs/109102.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/109102.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo cat /usr/share/ca-certificates/109102.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-711095 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 ssh "sudo systemctl is-active docker": exit status 1 (246.651652ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 ssh "sudo systemctl is-active containerd": exit status 1 (247.937782ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-711095 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-711095
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-711095
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-711095 image ls --format short --alsologtostderr:
I0416 16:32:43.659528   20074 out.go:291] Setting OutFile to fd 1 ...
I0416 16:32:43.659679   20074 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:43.659685   20074 out.go:304] Setting ErrFile to fd 2...
I0416 16:32:43.659691   20074 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:43.659866   20074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
I0416 16:32:43.660395   20074 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:43.660484   20074 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:43.660827   20074 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:43.660901   20074 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:43.675685   20074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
I0416 16:32:43.676189   20074 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:43.676874   20074 main.go:141] libmachine: Using API Version  1
I0416 16:32:43.676900   20074 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:43.677232   20074 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:43.677401   20074 main.go:141] libmachine: (functional-711095) Calling .GetState
I0416 16:32:43.679163   20074 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:43.679217   20074 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:43.693829   20074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
I0416 16:32:43.694251   20074 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:43.694709   20074 main.go:141] libmachine: Using API Version  1
I0416 16:32:43.694729   20074 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:43.695116   20074 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:43.695326   20074 main.go:141] libmachine: (functional-711095) Calling .DriverName
I0416 16:32:43.695521   20074 ssh_runner.go:195] Run: systemctl --version
I0416 16:32:43.695562   20074 main.go:141] libmachine: (functional-711095) Calling .GetSSHHostname
I0416 16:32:43.698622   20074 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:43.699131   20074 main.go:141] libmachine: (functional-711095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:f8", ip: ""} in network mk-functional-711095: {Iface:virbr1 ExpiryTime:2024-04-16 17:29:17 +0000 UTC Type:0 Mac:52:54:00:de:25:f8 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-711095 Clientid:01:52:54:00:de:25:f8}
I0416 16:32:43.699156   20074 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined IP address 192.168.39.157 and MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:43.699309   20074 main.go:141] libmachine: (functional-711095) Calling .GetSSHPort
I0416 16:32:43.699489   20074 main.go:141] libmachine: (functional-711095) Calling .GetSSHKeyPath
I0416 16:32:43.699652   20074 main.go:141] libmachine: (functional-711095) Calling .GetSSHUsername
I0416 16:32:43.699812   20074 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/functional-711095/id_rsa Username:docker}
I0416 16:32:43.844452   20074 ssh_runner.go:195] Run: sudo crictl images --output json
I0416 16:32:43.923372   20074 main.go:141] libmachine: Making call to close driver server
I0416 16:32:43.923441   20074 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:43.923781   20074 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:43.923816   20074 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:32:43.923846   20074 main.go:141] libmachine: Making call to close driver server
I0416 16:32:43.923854   20074 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:43.923784   20074 main.go:141] libmachine: (functional-711095) DBG | Closing plugin on server side
I0416 16:32:43.924226   20074 main.go:141] libmachine: (functional-711095) DBG | Closing plugin on server side
I0416 16:32:43.924237   20074 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:43.924279   20074 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-711095 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.29.3            | a1d263b5dc5b0 | 83.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | c613f16b66424 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 8c390d98f50c0 | 60.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| localhost/minikube-local-cache-test     | functional-711095  | ee65aefea8336 | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 39f995c9f1996 | 129MB  |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 6052a25da3f97 | 123MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-711095  | adbc95d620acf | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-711095  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-711095 image ls --format table --alsologtostderr:
I0416 16:32:48.693042   20432 out.go:291] Setting OutFile to fd 1 ...
I0416 16:32:48.693152   20432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:48.693162   20432 out.go:304] Setting ErrFile to fd 2...
I0416 16:32:48.693166   20432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:48.693341   20432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
I0416 16:32:48.693844   20432 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:48.693941   20432 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:48.694311   20432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:48.694361   20432 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:48.709026   20432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
I0416 16:32:48.709450   20432 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:48.710040   20432 main.go:141] libmachine: Using API Version  1
I0416 16:32:48.710066   20432 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:48.710401   20432 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:48.710612   20432 main.go:141] libmachine: (functional-711095) Calling .GetState
I0416 16:32:48.712233   20432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:48.712266   20432 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:48.726182   20432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
I0416 16:32:48.726580   20432 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:48.726974   20432 main.go:141] libmachine: Using API Version  1
I0416 16:32:48.727024   20432 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:48.727320   20432 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:48.727476   20432 main.go:141] libmachine: (functional-711095) Calling .DriverName
I0416 16:32:48.727669   20432 ssh_runner.go:195] Run: systemctl --version
I0416 16:32:48.727690   20432 main.go:141] libmachine: (functional-711095) Calling .GetSSHHostname
I0416 16:32:48.729994   20432 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:48.730367   20432 main.go:141] libmachine: (functional-711095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:f8", ip: ""} in network mk-functional-711095: {Iface:virbr1 ExpiryTime:2024-04-16 17:29:17 +0000 UTC Type:0 Mac:52:54:00:de:25:f8 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-711095 Clientid:01:52:54:00:de:25:f8}
I0416 16:32:48.730387   20432 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined IP address 192.168.39.157 and MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:48.730539   20432 main.go:141] libmachine: (functional-711095) Calling .GetSSHPort
I0416 16:32:48.730667   20432 main.go:141] libmachine: (functional-711095) Calling .GetSSHKeyPath
I0416 16:32:48.730809   20432 main.go:141] libmachine: (functional-711095) Calling .GetSSHUsername
I0416 16:32:48.730915   20432 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/functional-711095/id_rsa Username:docker}
I0416 16:32:48.817463   20432 ssh_runner.go:195] Run: sudo crictl images --output json
I0416 16:32:48.893768   20432 main.go:141] libmachine: Making call to close driver server
I0416 16:32:48.893788   20432 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:48.894074   20432 main.go:141] libmachine: (functional-711095) DBG | Closing plugin on server side
I0416 16:32:48.894125   20432 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:48.894142   20432 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:32:48.894155   20432 main.go:141] libmachine: Making call to close driver server
I0416 16:32:48.894166   20432 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:48.894404   20432 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:48.894447   20432 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-711095 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ee65aefea833675d3321922521292cd330a80a0009f6d27a558bf01d7e69d78c","repoDigests":["localhost/minikube-local-cache-test@sha256:cbe4abe73896be053af6df6a1e4ad0057cd38120bec63b185e959f2d9860d74e"],"repoTags":["localhost/minikube-local-cache-test:functional-711095"],"size":"3330"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry
.k8s.io/kube-controller-manager:v1.29.3"],"size":"123142962"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"8711e3cf933860481502f5dc347917162ce15083cb5840034f00b0d32033f1c2","repoDigests":["docker.io/library/be95cf9f7e3843695929510e2f612178fa94fde1a84ecf64036965d4240c8eec-tmp@sha256:d26d6c55309606528dedfd3324dc8eb158de670d41caf6ec05f71b8af7829cac"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"adbc95d620acffa7da8c234a3d19b3aa0260a3520c6c759f84d5ec63fff19557",
"repoDigests":["localhost/my-image@sha256:cb3d79f6993056c32568e416dabf1c72adbc9347f78d7419999127f561bd0c25"],"repoTags":["localhost/my-image:functional-711095"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"128508878"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256
:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":["docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1","docker.io/library/nginx@sha256:cd64407576751d9b9ba4924f758d3d39fe76a6e142c32169625b60934c95f057"],"repoTags":["docker.io/library/nginx:latest"],"size":"190874053"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a","regi
stry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"60724018"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d
4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-711095"],"size":"34114467"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"83634073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eac
b7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e9
8765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-711095 image ls --format json --alsologtostderr:
I0416 16:32:48.355595   20358 out.go:291] Setting OutFile to fd 1 ...
I0416 16:32:48.355846   20358 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:48.355882   20358 out.go:304] Setting ErrFile to fd 2...
I0416 16:32:48.355895   20358 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:48.356194   20358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
I0416 16:32:48.357077   20358 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:48.357240   20358 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:48.357835   20358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:48.357898   20358 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:48.375693   20358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35691
I0416 16:32:48.376404   20358 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:48.376966   20358 main.go:141] libmachine: Using API Version  1
I0416 16:32:48.376988   20358 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:48.377358   20358 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:48.377592   20358 main.go:141] libmachine: (functional-711095) Calling .GetState
I0416 16:32:48.379853   20358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:48.379899   20358 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:48.401931   20358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42263
I0416 16:32:48.402406   20358 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:48.402920   20358 main.go:141] libmachine: Using API Version  1
I0416 16:32:48.402938   20358 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:48.403401   20358 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:48.403559   20358 main.go:141] libmachine: (functional-711095) Calling .DriverName
I0416 16:32:48.403747   20358 ssh_runner.go:195] Run: systemctl --version
I0416 16:32:48.403773   20358 main.go:141] libmachine: (functional-711095) Calling .GetSSHHostname
I0416 16:32:48.406942   20358 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:48.407398   20358 main.go:141] libmachine: (functional-711095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:f8", ip: ""} in network mk-functional-711095: {Iface:virbr1 ExpiryTime:2024-04-16 17:29:17 +0000 UTC Type:0 Mac:52:54:00:de:25:f8 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-711095 Clientid:01:52:54:00:de:25:f8}
I0416 16:32:48.407452   20358 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined IP address 192.168.39.157 and MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:48.407746   20358 main.go:141] libmachine: (functional-711095) Calling .GetSSHPort
I0416 16:32:48.407863   20358 main.go:141] libmachine: (functional-711095) Calling .GetSSHKeyPath
I0416 16:32:48.408114   20358 main.go:141] libmachine: (functional-711095) Calling .GetSSHUsername
I0416 16:32:48.408219   20358 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/functional-711095/id_rsa Username:docker}
I0416 16:32:48.540492   20358 ssh_runner.go:195] Run: sudo crictl images --output json
I0416 16:32:48.627644   20358 main.go:141] libmachine: Making call to close driver server
I0416 16:32:48.627658   20358 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:48.627995   20358 main.go:141] libmachine: (functional-711095) DBG | Closing plugin on server side
I0416 16:32:48.628045   20358 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:48.628058   20358 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:32:48.628078   20358 main.go:141] libmachine: Making call to close driver server
I0416 16:32:48.628110   20358 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:48.628390   20358 main.go:141] libmachine: (functional-711095) DBG | Closing plugin on server side
I0416 16:32:48.628391   20358 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:48.628415   20358 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-711095 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-711095
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ee65aefea833675d3321922521292cd330a80a0009f6d27a558bf01d7e69d78c
repoDigests:
- localhost/minikube-local-cache-test@sha256:cbe4abe73896be053af6df6a1e4ad0057cd38120bec63b185e959f2d9860d74e
repoTags:
- localhost/minikube-local-cache-test:functional-711095
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests:
- docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1
- docker.io/library/nginx@sha256:cd64407576751d9b9ba4924f758d3d39fe76a6e142c32169625b60934c95f057
repoTags:
- docker.io/library/nginx:latest
size: "190874053"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "123142962"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
- registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "60724018"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "128508878"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "83634073"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-711095 image ls --format yaml --alsologtostderr:
I0416 16:32:43.994589   20105 out.go:291] Setting OutFile to fd 1 ...
I0416 16:32:43.994700   20105 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:43.994706   20105 out.go:304] Setting ErrFile to fd 2...
I0416 16:32:43.994712   20105 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:43.994929   20105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
I0416 16:32:43.995550   20105 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:43.995659   20105 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:43.996108   20105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:43.996163   20105 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:44.010878   20105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33887
I0416 16:32:44.011342   20105 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:44.011925   20105 main.go:141] libmachine: Using API Version  1
I0416 16:32:44.011953   20105 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:44.012235   20105 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:44.012459   20105 main.go:141] libmachine: (functional-711095) Calling .GetState
I0416 16:32:44.014168   20105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:44.014205   20105 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:44.030118   20105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
I0416 16:32:44.030505   20105 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:44.031005   20105 main.go:141] libmachine: Using API Version  1
I0416 16:32:44.031029   20105 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:44.031326   20105 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:44.031533   20105 main.go:141] libmachine: (functional-711095) Calling .DriverName
I0416 16:32:44.031731   20105 ssh_runner.go:195] Run: systemctl --version
I0416 16:32:44.031755   20105 main.go:141] libmachine: (functional-711095) Calling .GetSSHHostname
I0416 16:32:44.034867   20105 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:44.035321   20105 main.go:141] libmachine: (functional-711095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:f8", ip: ""} in network mk-functional-711095: {Iface:virbr1 ExpiryTime:2024-04-16 17:29:17 +0000 UTC Type:0 Mac:52:54:00:de:25:f8 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-711095 Clientid:01:52:54:00:de:25:f8}
I0416 16:32:44.035352   20105 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined IP address 192.168.39.157 and MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:44.035485   20105 main.go:141] libmachine: (functional-711095) Calling .GetSSHPort
I0416 16:32:44.035609   20105 main.go:141] libmachine: (functional-711095) Calling .GetSSHKeyPath
I0416 16:32:44.035747   20105 main.go:141] libmachine: (functional-711095) Calling .GetSSHUsername
I0416 16:32:44.035912   20105 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/functional-711095/id_rsa Username:docker}
I0416 16:32:44.149555   20105 ssh_runner.go:195] Run: sudo crictl images --output json
I0416 16:32:44.215480   20105 main.go:141] libmachine: Making call to close driver server
I0416 16:32:44.215498   20105 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:44.215763   20105 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:44.215779   20105 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:32:44.215800   20105 main.go:141] libmachine: Making call to close driver server
I0416 16:32:44.215807   20105 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:44.216145   20105 main.go:141] libmachine: (functional-711095) DBG | Closing plugin on server side
I0416 16:32:44.216133   20105 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:44.216167   20105 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 ssh pgrep buildkitd: exit status 1 (229.82857ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image build -t localhost/my-image:functional-711095 testdata/build --alsologtostderr
E0416 16:32:44.851853   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 image build -t localhost/my-image:functional-711095 testdata/build --alsologtostderr: (3.14347421s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-711095 image build -t localhost/my-image:functional-711095 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8711e3cf933
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-711095
--> adbc95d620a
Successfully tagged localhost/my-image:functional-711095
adbc95d620acffa7da8c234a3d19b3aa0260a3520c6c759f84d5ec63fff19557
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-711095 image build -t localhost/my-image:functional-711095 testdata/build --alsologtostderr:
I0416 16:32:44.504609   20173 out.go:291] Setting OutFile to fd 1 ...
I0416 16:32:44.504749   20173 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:44.504758   20173 out.go:304] Setting ErrFile to fd 2...
I0416 16:32:44.504762   20173 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0416 16:32:44.504972   20173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
I0416 16:32:44.505520   20173 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:44.506132   20173 config.go:182] Loaded profile config "functional-711095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0416 16:32:44.506485   20173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:44.506544   20173 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:44.521903   20173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46787
I0416 16:32:44.522374   20173 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:44.522932   20173 main.go:141] libmachine: Using API Version  1
I0416 16:32:44.522958   20173 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:44.523319   20173 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:44.523513   20173 main.go:141] libmachine: (functional-711095) Calling .GetState
I0416 16:32:44.525340   20173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0416 16:32:44.525386   20173 main.go:141] libmachine: Launching plugin server for driver kvm2
I0416 16:32:44.540593   20173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
I0416 16:32:44.541113   20173 main.go:141] libmachine: () Calling .GetVersion
I0416 16:32:44.541624   20173 main.go:141] libmachine: Using API Version  1
I0416 16:32:44.541684   20173 main.go:141] libmachine: () Calling .SetConfigRaw
I0416 16:32:44.542081   20173 main.go:141] libmachine: () Calling .GetMachineName
I0416 16:32:44.542313   20173 main.go:141] libmachine: (functional-711095) Calling .DriverName
I0416 16:32:44.542558   20173 ssh_runner.go:195] Run: systemctl --version
I0416 16:32:44.542587   20173 main.go:141] libmachine: (functional-711095) Calling .GetSSHHostname
I0416 16:32:44.545549   20173 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:44.545945   20173 main.go:141] libmachine: (functional-711095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:25:f8", ip: ""} in network mk-functional-711095: {Iface:virbr1 ExpiryTime:2024-04-16 17:29:17 +0000 UTC Type:0 Mac:52:54:00:de:25:f8 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:functional-711095 Clientid:01:52:54:00:de:25:f8}
I0416 16:32:44.545985   20173 main.go:141] libmachine: (functional-711095) DBG | domain functional-711095 has defined IP address 192.168.39.157 and MAC address 52:54:00:de:25:f8 in network mk-functional-711095
I0416 16:32:44.546146   20173 main.go:141] libmachine: (functional-711095) Calling .GetSSHPort
I0416 16:32:44.546309   20173 main.go:141] libmachine: (functional-711095) Calling .GetSSHKeyPath
I0416 16:32:44.546460   20173 main.go:141] libmachine: (functional-711095) Calling .GetSSHUsername
I0416 16:32:44.546621   20173 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/functional-711095/id_rsa Username:docker}
I0416 16:32:44.699394   20173 build_images.go:161] Building image from path: /tmp/build.1503287551.tar
I0416 16:32:44.699464   20173 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0416 16:32:44.748360   20173 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1503287551.tar
I0416 16:32:44.778396   20173 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1503287551.tar: stat -c "%s %y" /var/lib/minikube/build/build.1503287551.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1503287551.tar': No such file or directory
I0416 16:32:44.778436   20173 ssh_runner.go:362] scp /tmp/build.1503287551.tar --> /var/lib/minikube/build/build.1503287551.tar (3072 bytes)
I0416 16:32:44.882478   20173 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1503287551
I0416 16:32:44.958545   20173 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1503287551 -xf /var/lib/minikube/build/build.1503287551.tar
I0416 16:32:45.040408   20173 crio.go:315] Building image: /var/lib/minikube/build/build.1503287551
I0416 16:32:45.040474   20173 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-711095 /var/lib/minikube/build/build.1503287551 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0416 16:32:47.518923   20173 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-711095 /var/lib/minikube/build/build.1503287551 --cgroup-manager=cgroupfs: (2.4784275s)
I0416 16:32:47.519005   20173 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1503287551
I0416 16:32:47.555414   20173 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1503287551.tar
I0416 16:32:47.588576   20173 build_images.go:217] Built localhost/my-image:functional-711095 from /tmp/build.1503287551.tar
I0416 16:32:47.588609   20173 build_images.go:133] succeeded building to: functional-711095
I0416 16:32:47.588614   20173 build_images.go:134] failed building to: 
I0416 16:32:47.588637   20173 main.go:141] libmachine: Making call to close driver server
I0416 16:32:47.588649   20173 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:47.588916   20173 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:47.588936   20173 main.go:141] libmachine: Making call to close connection to plugin binary
I0416 16:32:47.588947   20173 main.go:141] libmachine: Making call to close driver server
I0416 16:32:47.588956   20173 main.go:141] libmachine: (functional-711095) Calling .Close
I0416 16:32:47.589281   20173 main.go:141] libmachine: Successfully made call to close driver server
I0416 16:32:47.589290   20173 main.go:141] libmachine: (functional-711095) DBG | Closing plugin on server side
I0416 16:32:47.589307   20173 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
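Note: the whole build is driven by one CLI call; a minimal sketch of the same flow, using only the commands recorded in this block (the functional-711095 profile and the testdata/build context are specific to this run):
    out/minikube-linux-amd64 -p functional-711095 image build -t localhost/my-image:functional-711095 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-711095 image ls
As the stderr trace shows, the CLI tars the build context, copies it to /var/lib/minikube/build on the node, and runs sudo podman build with --cgroup-manager=cgroupfs there.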

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.003011843s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-711095
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image load --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 image load --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr: (6.442151334s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.68s)
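Note: the load-from-daemon path exercised here needs the host-side tag created in ImageCommands/Setup plus a single load; a sketch using the same image reference as this run:
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-711095
    out/minikube-linux-amd64 -p functional-711095 image load --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr
    out/minikube-linux-amd64 -p functional-711095 image ls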

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image load --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 image load --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr: (2.792932144s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.49s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (14.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-711095 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-711095 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-c497q" [36cee026-1ba7-4365-9a07-4fb4b3978c79] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-c497q" [36cee026-1ba7-4365-9a07-4fb4b3978c79] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.005598907s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image save gcr.io/google-containers/addon-resizer:functional-711095 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 image save gcr.io/google-containers/addon-resizer:functional-711095 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.051520023s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image rm gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.432395301s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.70s)
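Note: together with ImageSaveToFile above, this block verifies a save/load tar round trip; a minimal sketch using the same commands (the save path is this job's workspace, any writable path should do):
    out/minikube-linux-amd64 -p functional-711095 image save gcr.io/google-containers/addon-resizer:functional-711095 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-711095 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-711095 image ls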

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 service list -o json
functional_test.go:1490: Took "547.97349ms" to run "out/minikube-linux-amd64 -p functional-711095 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.157:30378
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-711095
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 image save --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-711095 image save --daemon gcr.io/google-containers/addon-resizer:functional-711095 --alsologtostderr: (1.33660255s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-711095
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.157:30378
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
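Note: the ServiceCmd subtests above all reuse the deployment created in ServiceCmd/DeployApp; a condensed sketch of the setup and URL lookup, using the commands recorded there and here (the NodePort 30378 is assigned by the cluster, not chosen):
    kubectl --context functional-711095 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-711095 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-711095 service hello-node --url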

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdany-port3712117872/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713285161491866911" to /tmp/TestFunctionalparallelMountCmdany-port3712117872/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713285161491866911" to /tmp/TestFunctionalparallelMountCmdany-port3712117872/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713285161491866911" to /tmp/TestFunctionalparallelMountCmdany-port3712117872/001/test-1713285161491866911
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (265.500509ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 16 16:32 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 16 16:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 16 16:32 test-1713285161491866911
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh cat /mount-9p/test-1713285161491866911
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-711095 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ba9a869e-6ad8-442b-85d9-67605e5356a2] Pending
helpers_test.go:344: "busybox-mount" [ba9a869e-6ad8-442b-85d9-67605e5356a2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ba9a869e-6ad8-442b-85d9-67605e5356a2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ba9a869e-6ad8-442b-85d9-67605e5356a2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006681094s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-711095 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdany-port3712117872/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.86s)
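Note: a minimal sketch of the 9p mount check performed above, using the same host path and guest mount point (the host path is a per-run temp dir; the trailing & backgrounds the mount process, which the harness runs as a daemon):
    out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdany-port3712117872/001:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-711095 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-711095 ssh "sudo umount -f /mount-9p"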

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "293.93595ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "69.341341ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "269.117101ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "57.583906ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdspecific-port3426616441/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.306668ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdspecific-port3426616441/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 ssh "sudo umount -f /mount-9p": exit status 1 (235.030888ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-711095 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdspecific-port3426616441/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4210437226/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4210437226/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4210437226/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T" /mount1: exit status 1 (267.123162ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-711095 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-711095 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4210437226/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4210437226/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-711095 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4210437226/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-711095
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-711095
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-711095
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (203.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-543552 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0416 16:33:25.812582   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:34:47.733052   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-543552 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.636550805s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.30s)
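Note: the HA cluster used by the rest of this group comes from the single start invocation above; a minimal sketch:
    out/minikube-linux-amd64 start -p ha-543552 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr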

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-543552 -- rollout status deployment/busybox: (2.814578255s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-2prpr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-7wbjg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-zmcc2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-2prpr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-7wbjg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-zmcc2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-2prpr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-7wbjg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-zmcc2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.23s)
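Note: a condensed sketch of the DNS check above, with one of this run's pod names (busybox-7fdf7869d9-2prpr) standing in for any busybox replica:
    out/minikube-linux-amd64 kubectl -p ha-543552 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-543552 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-2prpr -- nslookup kubernetes.default.svc.cluster.local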

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-2prpr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-2prpr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-7wbjg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-7wbjg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-zmcc2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-zmcc2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)
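Note: the host reachability check above resolves host.minikube.internal inside a pod and pings the returned address; a sketch of one iteration (192.168.39.1 is the host-side address observed in this run):
    out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-2prpr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-543552 -- exec busybox-7fdf7869d9-2prpr -- sh -c "ping -c 1 192.168.39.1"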

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (47.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-543552 -v=7 --alsologtostderr
E0416 16:37:03.890054   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:37:10.030836   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:10.036135   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:10.046428   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:10.066721   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:10.107022   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:10.188039   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:10.348696   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:10.669372   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:11.310349   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:37:12.591072   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-543552 -v=7 --alsologtostderr: (46.455538334s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-543552 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0416 16:37:15.151861   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp testdata/cp-test.txt ha-543552:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552:/home/docker/cp-test.txt ha-543552-m02:/home/docker/cp-test_ha-543552_ha-543552-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test_ha-543552_ha-543552-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552:/home/docker/cp-test.txt ha-543552-m03:/home/docker/cp-test_ha-543552_ha-543552-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m03 "sudo cat /home/docker/cp-test_ha-543552_ha-543552-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552:/home/docker/cp-test.txt ha-543552-m04:/home/docker/cp-test_ha-543552_ha-543552-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m04 "sudo cat /home/docker/cp-test_ha-543552_ha-543552-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp testdata/cp-test.txt ha-543552-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552-m02.txt
E0416 16:37:20.272368   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m02:/home/docker/cp-test.txt ha-543552:/home/docker/cp-test_ha-543552-m02_ha-543552.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test_ha-543552-m02_ha-543552.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m02:/home/docker/cp-test.txt ha-543552-m03:/home/docker/cp-test_ha-543552-m02_ha-543552-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m03 "sudo cat /home/docker/cp-test_ha-543552-m02_ha-543552-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m02:/home/docker/cp-test.txt ha-543552-m04:/home/docker/cp-test_ha-543552-m02_ha-543552-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m04 "sudo cat /home/docker/cp-test_ha-543552-m02_ha-543552-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp testdata/cp-test.txt ha-543552-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt ha-543552:/home/docker/cp-test_ha-543552-m03_ha-543552.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test_ha-543552-m03_ha-543552.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt ha-543552-m02:/home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test_ha-543552-m03_ha-543552-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m03:/home/docker/cp-test.txt ha-543552-m04:/home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m04 "sudo cat /home/docker/cp-test_ha-543552-m03_ha-543552-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp testdata/cp-test.txt ha-543552-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1130197747/001/cp-test_ha-543552-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt ha-543552:/home/docker/cp-test_ha-543552-m04_ha-543552.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test_ha-543552-m04_ha-543552.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt ha-543552-m02:/home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test_ha-543552-m04_ha-543552-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 cp ha-543552-m04:/home/docker/cp-test.txt ha-543552-m03:/home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m03 "sudo cat /home/docker/cp-test_ha-543552-m04_ha-543552-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.69s)
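Note: every cp above is verified by reading the file back over ssh; a minimal sketch of one leg of the matrix, with this run's node names:
    out/minikube-linux-amd64 -p ha-543552 cp testdata/cp-test.txt ha-543552:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-543552 cp ha-543552:/home/docker/cp-test.txt ha-543552-m02:/home/docker/cp-test_ha-543552_ha-543552-m02.txt
    out/minikube-linux-amd64 -p ha-543552 ssh -n ha-543552-m02 "sudo cat /home/docker/cp-test_ha-543552_ha-543552-m02.txt"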

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0416 16:39:53.875052   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.470147045s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-543552 node delete m03 -v=7 --alsologtostderr: (16.883092627s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.70s)
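Note: a sketch of the node removal and the follow-up checks recorded above:
    out/minikube-linux-amd64 -p ha-543552 node delete m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
    kubectl get nodes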

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (377.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-543552 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0416 16:52:03.892956   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:52:10.030259   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 16:53:33.076977   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-543552 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m16.148891467s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (377.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (73.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-543552 --control-plane -v=7 --alsologtostderr
E0416 16:57:03.891059   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 16:57:10.030410   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-543552 --control-plane -v=7 --alsologtostderr: (1m13.0323373s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.90s)
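Note: a minimal sketch of re-adding a control-plane node, as exercised above:
    out/minikube-linux-amd64 node add -p ha-543552 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-543552 status -v=7 --alsologtostderr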

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

                                                
                                    
x
+
TestJSONOutput/start/Command (60.3s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-826760 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-826760 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.301566058s)
--- PASS: TestJSONOutput/start/Command (60.30s)
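Note: a minimal sketch of the JSON-output start exercised above (json-output-826760 is the throwaway profile created for this group):
    out/minikube-linux-amd64 start -p json-output-826760 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio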

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-826760 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-826760 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.44s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-826760 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-826760 --output=json --user=testUser: (7.444606176s)
--- PASS: TestJSONOutput/stop/Command (7.44s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-286137 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-286137 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.35121ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5c74ccb8-60a5-4ca6-a674-3246de3858f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-286137] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"531534c7-a479-4b2b-9598-6a88b1c2ef6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18649"}}
	{"specversion":"1.0","id":"f09b9e13-e28c-4248-92ee-03ddad8814d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f5dff7b3-3a3b-4a0f-bbe9-1f54f32b633d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig"}}
	{"specversion":"1.0","id":"8d5d6f46-d305-4ded-962f-2925a464a4be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube"}}
	{"specversion":"1.0","id":"811b0f0d-b383-4c85-8931-7e914e11a9fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"647681e3-fc40-4454-a581-fc593bd7ee28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0afb5a5b-c8e4-40cf-bb65-e87682eae06a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-286137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-286137
--- PASS: TestErrorJSONOutput (0.21s)
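The stdout block above shows the envelope that --output=json emits: one CloudEvents-style JSON object per line with specversion, id, source, type, datacontenttype and a data payload, including an io.k8s.sigs.minikube.error event carrying the exit code. A minimal decoding sketch in Go, assuming the events arrive one object per line on stdin (for example by piping `out/minikube-linux-amd64 start ... --output=json` into it; the file name parse_events.go is hypothetical); the struct mirrors only the fields visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope shown in the log above; the data payload is kept
// as a generic string map because every value shown here is a string.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Bytes()
		if len(line) == 0 {
			continue
		}
		var ev event
		if err := json.Unmarshal(line, &ev); err != nil {
			fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
			continue
		}
		// io.k8s.sigs.minikube.error events carry the exit code and message,
		// as in the DRV_UNSUPPORTED_OS event above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}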

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (95.32s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-676626 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-676626 --driver=kvm2  --container-runtime=crio: (43.908909537s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-679523 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-679523 --driver=kvm2  --container-runtime=crio: (48.69313323s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-676626
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-679523
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-679523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-679523
helpers_test.go:175: Cleaning up "first-676626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-676626
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-676626: (1.003701599s)
--- PASS: TestMinikubeProfile (95.32s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-019354 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-019354 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.013237733s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.01s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-019354 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-019354 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
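The VerifyMount* steps boil down to confirming the mounted host directory is visible in the guest and backed by a 9p filesystem. A small sketch of the same check, reusing the mount-start-1-019354 profile and the `ls /minikube-host` and `mount` probes from the log (the grep is done in Go instead of the guest shell):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "mount-start-1-019354" // profile started with --mount above

	// List the mounted host directory, as mount_start_test.go:114 does.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("ls /minikube-host: %v: %s", err, ls))
	}
	fmt.Printf("host mount contents:\n%s", ls)

	// Confirm a 9p mount entry exists, as mount_start_test.go:127 does.
	mounts, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "mount").CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("mount: %v: %s", err, mounts))
	}
	if !strings.Contains(string(mounts), "9p") {
		panic("expected a 9p mount entry for /minikube-host")
	}
	fmt.Println("9p mount present")
}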

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-034450 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-034450 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.564369909s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.56s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-034450 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-034450 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-019354 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-034450 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-034450 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-034450
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-034450: (1.408766167s)
--- PASS: TestMountStart/serial/Stop (1.41s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-034450
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-034450: (22.135766218s)
--- PASS: TestMountStart/serial/RestartStopped (23.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-034450 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-034450 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (103.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334221 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0416 17:02:03.890270   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 17:02:10.030748   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334221 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m42.705241505s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.14s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-334221 -- rollout status deployment/busybox: (2.664557902s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-fn86w -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-tzz4s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-fn86w -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-tzz4s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-fn86w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-tzz4s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.28s)
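The DNS checks above list the busybox pods and then run nslookup from inside each one through the profile's bundled kubectl. A condensed sketch of that loop, assuming the multinode-334221 profile from this run and the exact subcommands shown above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// mk wraps the bundled kubectl the test above uses, scoped to the
// multinode-334221 profile.
func mk(args ...string) ([]byte, error) {
	base := []string{"kubectl", "-p", "multinode-334221", "--"}
	return exec.Command("out/minikube-linux-amd64", append(base, args...)...).CombinedOutput()
}

func main() {
	// Same pod listing the test performs at multinode_test.go:528.
	out, err := mk("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	if err != nil {
		panic(fmt.Errorf("%v: %s", err, out))
	}
	for _, pod := range strings.Fields(string(out)) {
		// Resolve the in-cluster service name from each pod, mirroring the
		// nslookup checks above.
		res, err := mk("exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local")
		fmt.Printf("--- %s (err=%v) ---\n%s\n", pod, err, res)
	}
}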

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-fn86w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-fn86w -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-tzz4s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334221 -- exec busybox-7fdf7869d9-tzz4s -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (41.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-334221 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-334221 -v 3 --alsologtostderr: (40.911055732s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.49s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-334221 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp testdata/cp-test.txt multinode-334221:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3051956935/001/cp-test_multinode-334221.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221:/home/docker/cp-test.txt multinode-334221-m02:/home/docker/cp-test_multinode-334221_multinode-334221-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m02 "sudo cat /home/docker/cp-test_multinode-334221_multinode-334221-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221:/home/docker/cp-test.txt multinode-334221-m03:/home/docker/cp-test_multinode-334221_multinode-334221-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m03 "sudo cat /home/docker/cp-test_multinode-334221_multinode-334221-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp testdata/cp-test.txt multinode-334221-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3051956935/001/cp-test_multinode-334221-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221-m02:/home/docker/cp-test.txt multinode-334221:/home/docker/cp-test_multinode-334221-m02_multinode-334221.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221 "sudo cat /home/docker/cp-test_multinode-334221-m02_multinode-334221.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221-m02:/home/docker/cp-test.txt multinode-334221-m03:/home/docker/cp-test_multinode-334221-m02_multinode-334221-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m03 "sudo cat /home/docker/cp-test_multinode-334221-m02_multinode-334221-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp testdata/cp-test.txt multinode-334221-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3051956935/001/cp-test_multinode-334221-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt multinode-334221:/home/docker/cp-test_multinode-334221-m03_multinode-334221.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221 "sudo cat /home/docker/cp-test_multinode-334221-m03_multinode-334221.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 cp multinode-334221-m03:/home/docker/cp-test.txt multinode-334221-m02:/home/docker/cp-test_multinode-334221-m03_multinode-334221-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 ssh -n multinode-334221-m02 "sudo cat /home/docker/cp-test_multinode-334221-m03_multinode-334221-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.66s)
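Each cp step above is a copy followed by a `ssh -n <node> sudo cat` read-back. A condensed sketch of one host-to-node-to-node round trip using the same subcommands, assuming the multinode-334221 profile and node names from this log:

package main

import (
	"fmt"
	"os/exec"
)

// minikube runs one subcommand against the multinode-334221 profile and
// panics on failure, keeping the sketch short.
func minikube(args ...string) []byte {
	full := append([]string{"-p", "multinode-334221"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("%v: %v: %s", args, err, out))
	}
	return out
}

func main() {
	// Host -> primary node, as in the first cp step above.
	minikube("cp", "testdata/cp-test.txt", "multinode-334221:/home/docker/cp-test.txt")

	// Node -> node copy.
	minikube("cp", "multinode-334221:/home/docker/cp-test.txt",
		"multinode-334221-m02:/home/docker/cp-test_multinode-334221_multinode-334221-m02.txt")

	// Read the file back on the target node to verify the copy landed.
	out := minikube("ssh", "-n", "multinode-334221-m02",
		"sudo cat /home/docker/cp-test_multinode-334221_multinode-334221-m02.txt")
	fmt.Printf("copied contents: %s", out)
}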

                                                
                                    
x
+
TestMultiNode/serial/StopNode (3.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-334221 node stop m03: (2.30699376s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334221 status: exit status 7 (437.450032ms)

                                                
                                                
-- stdout --
	multinode-334221
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-334221-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-334221-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334221 status --alsologtostderr: exit status 7 (433.52886ms)

                                                
                                                
-- stdout --
	multinode-334221
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-334221-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-334221-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 17:04:41.046912   37862 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:04:41.047025   37862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:04:41.047049   37862 out.go:304] Setting ErrFile to fd 2...
	I0416 17:04:41.047054   37862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:04:41.047627   37862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:04:41.047973   37862 out.go:298] Setting JSON to false
	I0416 17:04:41.048011   37862 mustload.go:65] Loading cluster: multinode-334221
	I0416 17:04:41.048644   37862 notify.go:220] Checking for updates...
	I0416 17:04:41.049035   37862 config.go:182] Loaded profile config "multinode-334221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 17:04:41.049054   37862 status.go:255] checking status of multinode-334221 ...
	I0416 17:04:41.049473   37862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:04:41.049512   37862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:04:41.065663   37862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0416 17:04:41.066109   37862 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:04:41.066717   37862 main.go:141] libmachine: Using API Version  1
	I0416 17:04:41.066755   37862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:04:41.067072   37862 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:04:41.067291   37862 main.go:141] libmachine: (multinode-334221) Calling .GetState
	I0416 17:04:41.069048   37862 status.go:330] multinode-334221 host status = "Running" (err=<nil>)
	I0416 17:04:41.069071   37862 host.go:66] Checking if "multinode-334221" exists ...
	I0416 17:04:41.069420   37862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:04:41.069462   37862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:04:41.084656   37862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0416 17:04:41.085080   37862 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:04:41.085580   37862 main.go:141] libmachine: Using API Version  1
	I0416 17:04:41.085600   37862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:04:41.085862   37862 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:04:41.086030   37862 main.go:141] libmachine: (multinode-334221) Calling .GetIP
	I0416 17:04:41.088716   37862 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:04:41.089199   37862 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:04:41.089237   37862 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:04:41.089380   37862 host.go:66] Checking if "multinode-334221" exists ...
	I0416 17:04:41.089695   37862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:04:41.089763   37862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:04:41.104500   37862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0416 17:04:41.104867   37862 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:04:41.105300   37862 main.go:141] libmachine: Using API Version  1
	I0416 17:04:41.105319   37862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:04:41.105610   37862 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:04:41.105787   37862 main.go:141] libmachine: (multinode-334221) Calling .DriverName
	I0416 17:04:41.105976   37862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:04:41.105995   37862 main.go:141] libmachine: (multinode-334221) Calling .GetSSHHostname
	I0416 17:04:41.108753   37862 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:04:41.109228   37862 main.go:141] libmachine: (multinode-334221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:c2:6e", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:02:16 +0000 UTC Type:0 Mac:52:54:00:d1:c2:6e Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-334221 Clientid:01:52:54:00:d1:c2:6e}
	I0416 17:04:41.109254   37862 main.go:141] libmachine: (multinode-334221) DBG | domain multinode-334221 has defined IP address 192.168.39.137 and MAC address 52:54:00:d1:c2:6e in network mk-multinode-334221
	I0416 17:04:41.109417   37862 main.go:141] libmachine: (multinode-334221) Calling .GetSSHPort
	I0416 17:04:41.109559   37862 main.go:141] libmachine: (multinode-334221) Calling .GetSSHKeyPath
	I0416 17:04:41.109694   37862 main.go:141] libmachine: (multinode-334221) Calling .GetSSHUsername
	I0416 17:04:41.109837   37862 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221/id_rsa Username:docker}
	I0416 17:04:41.189412   37862 ssh_runner.go:195] Run: systemctl --version
	I0416 17:04:41.196849   37862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:04:41.213916   37862 kubeconfig.go:125] found "multinode-334221" server: "https://192.168.39.137:8443"
	I0416 17:04:41.213946   37862 api_server.go:166] Checking apiserver status ...
	I0416 17:04:41.213990   37862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 17:04:41.228417   37862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup
	W0416 17:04:41.238496   37862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 17:04:41.238574   37862 ssh_runner.go:195] Run: ls
	I0416 17:04:41.243807   37862 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0416 17:04:41.248214   37862 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0416 17:04:41.248241   37862 status.go:422] multinode-334221 apiserver status = Running (err=<nil>)
	I0416 17:04:41.248268   37862 status.go:257] multinode-334221 status: &{Name:multinode-334221 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:04:41.248293   37862 status.go:255] checking status of multinode-334221-m02 ...
	I0416 17:04:41.248564   37862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:04:41.248603   37862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:04:41.264947   37862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0416 17:04:41.265307   37862 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:04:41.265705   37862 main.go:141] libmachine: Using API Version  1
	I0416 17:04:41.265731   37862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:04:41.266025   37862 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:04:41.266206   37862 main.go:141] libmachine: (multinode-334221-m02) Calling .GetState
	I0416 17:04:41.267653   37862 status.go:330] multinode-334221-m02 host status = "Running" (err=<nil>)
	I0416 17:04:41.267667   37862 host.go:66] Checking if "multinode-334221-m02" exists ...
	I0416 17:04:41.267944   37862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:04:41.267978   37862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:04:41.283159   37862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0416 17:04:41.283597   37862 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:04:41.284039   37862 main.go:141] libmachine: Using API Version  1
	I0416 17:04:41.284055   37862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:04:41.284386   37862 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:04:41.284601   37862 main.go:141] libmachine: (multinode-334221-m02) Calling .GetIP
	I0416 17:04:41.287137   37862 main.go:141] libmachine: (multinode-334221-m02) DBG | domain multinode-334221-m02 has defined MAC address 52:54:00:31:c9:02 in network mk-multinode-334221
	I0416 17:04:41.287530   37862 main.go:141] libmachine: (multinode-334221-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:c9:02", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:03:17 +0000 UTC Type:0 Mac:52:54:00:31:c9:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-334221-m02 Clientid:01:52:54:00:31:c9:02}
	I0416 17:04:41.287567   37862 main.go:141] libmachine: (multinode-334221-m02) DBG | domain multinode-334221-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:31:c9:02 in network mk-multinode-334221
	I0416 17:04:41.287620   37862 host.go:66] Checking if "multinode-334221-m02" exists ...
	I0416 17:04:41.287913   37862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:04:41.287947   37862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:04:41.302207   37862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0416 17:04:41.302558   37862 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:04:41.302950   37862 main.go:141] libmachine: Using API Version  1
	I0416 17:04:41.302966   37862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:04:41.303277   37862 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:04:41.303457   37862 main.go:141] libmachine: (multinode-334221-m02) Calling .DriverName
	I0416 17:04:41.303604   37862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 17:04:41.303621   37862 main.go:141] libmachine: (multinode-334221-m02) Calling .GetSSHHostname
	I0416 17:04:41.306257   37862 main.go:141] libmachine: (multinode-334221-m02) DBG | domain multinode-334221-m02 has defined MAC address 52:54:00:31:c9:02 in network mk-multinode-334221
	I0416 17:04:41.306645   37862 main.go:141] libmachine: (multinode-334221-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:c9:02", ip: ""} in network mk-multinode-334221: {Iface:virbr1 ExpiryTime:2024-04-16 18:03:17 +0000 UTC Type:0 Mac:52:54:00:31:c9:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-334221-m02 Clientid:01:52:54:00:31:c9:02}
	I0416 17:04:41.306687   37862 main.go:141] libmachine: (multinode-334221-m02) DBG | domain multinode-334221-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:31:c9:02 in network mk-multinode-334221
	I0416 17:04:41.306813   37862 main.go:141] libmachine: (multinode-334221-m02) Calling .GetSSHPort
	I0416 17:04:41.306957   37862 main.go:141] libmachine: (multinode-334221-m02) Calling .GetSSHKeyPath
	I0416 17:04:41.307094   37862 main.go:141] libmachine: (multinode-334221-m02) Calling .GetSSHUsername
	I0416 17:04:41.307224   37862 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18649-3628/.minikube/machines/multinode-334221-m02/id_rsa Username:docker}
	I0416 17:04:41.393283   37862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 17:04:41.411017   37862 status.go:257] multinode-334221-m02 status: &{Name:multinode-334221-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 17:04:41.411046   37862 status.go:255] checking status of multinode-334221-m03 ...
	I0416 17:04:41.411336   37862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 17:04:41.411370   37862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 17:04:41.426944   37862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0416 17:04:41.427328   37862 main.go:141] libmachine: () Calling .GetVersion
	I0416 17:04:41.427782   37862 main.go:141] libmachine: Using API Version  1
	I0416 17:04:41.427829   37862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 17:04:41.428103   37862 main.go:141] libmachine: () Calling .GetMachineName
	I0416 17:04:41.428285   37862 main.go:141] libmachine: (multinode-334221-m03) Calling .GetState
	I0416 17:04:41.429576   37862 status.go:330] multinode-334221-m03 host status = "Stopped" (err=<nil>)
	I0416 17:04:41.429591   37862 status.go:343] host is not running, skipping remaining checks
	I0416 17:04:41.429599   37862 status.go:257] multinode-334221-m03 status: &{Name:multinode-334221-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.18s)
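Note the pattern above: after a node is stopped, `status` exits with code 7 while still printing a usable per-node report on stdout. A sketch of handling that from Go, treating exit code 7 as "ran, but something is stopped" the way this log does (and as the later "status error: exit status 7 (may be ok)" line suggests); this reading of the code is taken from the log, not from a documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-334221", "status")
	out, err := cmd.Output() // stdout still carries the per-node report on exit code 7

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 is how status reported stopped hosts/components above.
		fmt.Println("one or more nodes stopped (exit status 7)")
	default:
		panic(err)
	}
	fmt.Printf("%s", out)
}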

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (28.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 node start m03 -v=7 --alsologtostderr
E0416 17:05:06.935821   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-334221 node start m03 -v=7 --alsologtostderr: (27.990069496s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.65s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-334221 node delete m03: (1.850102846s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (165.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334221 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334221 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m44.526618035s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334221 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (165.07s)
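The `kubectl get nodes -o go-template=...` probe used above prints one True/False per node by ranging over each node's conditions and keeping only the Ready one. A sketch that runs the same probe (template copied from the test, minus the extra shell quoting) and fails unless every node reports True, assuming the multinode-334221 kube context from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Ready-condition template as multinode_test.go:404 above.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "--context", "multinode-334221",
		"get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("%v: %s", err, out))
	}
	// One status token per node; anything other than True means not Ready.
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			panic("found a node whose Ready condition is " + status)
		}
	}
	fmt.Println("all nodes Ready")
}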

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (49.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334221
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334221-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-334221-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.990028ms)

                                                
                                                
-- stdout --
	* [multinode-334221-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-334221-m02' is duplicated with machine name 'multinode-334221-m02' in profile 'multinode-334221'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334221-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334221-m03 --driver=kvm2  --container-runtime=crio: (47.841603745s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-334221
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-334221: exit status 80 (222.237454ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-334221 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-334221-m03 already exists in multinode-334221-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-334221-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.17s)
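The two failure modes above surface as distinct exit codes: 14 (MK_USAGE) for the duplicated profile name and 80 (GUEST_NODE_ADD) for the node-add collision. A hedged sketch that distinguishes them when scripting around these commands; the codes and their meanings are taken from this log rather than from any documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "multinode-334221-m02", "--driver=kvm2", "--container-runtime=crio").CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		switch exitErr.ExitCode() {
		case 14:
			// MK_USAGE in this log: profile name collides with an existing machine name.
			fmt.Println("usage error (exit 14)")
		case 80:
			// GUEST_NODE_ADD in this log: the node already exists.
			fmt.Println("guest error (exit 80)")
		default:
			fmt.Printf("unexpected exit code %d\n", exitErr.ExitCode())
		}
	}
	fmt.Printf("%s", out)
}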

                                                
                                    
x
+
TestScheduledStopUnix (118.16s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-256456 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-256456 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.443971666s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256456 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-256456 -n scheduled-stop-256456
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256456 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256456 --cancel-scheduled
E0416 17:21:46.937580   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 17:22:03.892800   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256456 -n scheduled-stop-256456
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-256456
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-256456 --schedule 15s
E0416 17:22:10.029896   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-256456
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-256456: exit status 7 (81.879682ms)

                                                
                                                
-- stdout --
	scheduled-stop-256456
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256456 -n scheduled-stop-256456
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-256456 -n scheduled-stop-256456: exit status 7 (75.032994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-256456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-256456
--- PASS: TestScheduledStopUnix (118.16s)
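The scheduled-stop flow above is: arm a stop with `stop --schedule <duration>`, optionally disarm it with `--cancel-scheduled`, then poll `status --format={{.Host}}` until the host reports Stopped (at which point status returns exit code 7, "may be ok" per the helper). A rough sketch of that sequence against the scheduled-stop-256456 profile from this run; the polling loop and its timings are assumptions, the commands are not:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "scheduled-stop-256456"

	// Arm a stop 15 seconds from now, as the test does.
	if _, err := run("stop", "-p", profile, "--schedule", "15s"); err != nil {
		panic(err)
	}

	// Poll the host state; once the scheduled stop fires, Host flips to
	// "Stopped" and status starts exiting non-zero, which is ignored here.
	for i := 0; i < 20; i++ {
		host, _ := run("status", "--format={{.Host}}", "-p", profile, "-n", profile)
		fmt.Printf("host: %s\n", host)
		if host == "Stopped" {
			return
		}
		time.Sleep(5 * time.Second)
	}
	panic("host never reached Stopped")
}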

                                                
                                    
x
+
TestRunningBinaryUpgrade (226.94s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1031193 start -p running-upgrade-512504 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1031193 start -p running-upgrade-512504 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m11.414713264s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-512504 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-512504 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.037485342s)
helpers_test.go:175: Cleaning up "running-upgrade-512504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-512504
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-512504: (1.032327035s)
--- PASS: TestRunningBinaryUpgrade (226.94s)
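The shape of the upgrade test is: provision a cluster with an older released binary, then point the freshly built binary at the same, still-running profile. A sketch of that sequence; the old-binary path is the temp file from this run and would in practice be whatever release binary you download, and only the flags shown in the log are used:

package main

import (
	"fmt"
	"os/exec"
)

// start brings up (or upgrades) the running-upgrade-512504 profile with the
// given minikube binary, mirroring version_upgrade_test.go:120 and :130 above.
func start(binary string, extra ...string) {
	args := append([]string{"start", "-p", "running-upgrade-512504", "--memory=2200"}, extra...)
	args = append(args, "--container-runtime=crio")
	out, err := exec.Command(binary, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("%s: %v: %s", binary, err, out))
	}
}

func main() {
	// Old released binary first (path taken from this log), with its flag spelling.
	start("/tmp/minikube-v1.26.0.1031193", "--vm-driver=kvm2")
	// Then the binary under test against the same running profile.
	start("out/minikube-linux-amd64", "--alsologtostderr", "-v=1", "--driver=kvm2")
}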

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-496544 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-496544 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (92.533574ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-496544] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (99.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-496544 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-496544 --driver=kvm2  --container-runtime=crio: (1m39.187426158s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-496544 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (99.45s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-496544 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-496544 --no-kubernetes --driver=kvm2  --container-runtime=crio: (7.543200312s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-496544 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-496544 status -o json: exit status 2 (259.590978ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-496544","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-496544
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-496544: (1.287269094s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.09s)

                                                
                                    
TestNetworkPlugins/group/false (3.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-726705 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-726705 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.842475ms)

                                                
                                                
-- stdout --
	* [false-726705] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 17:24:37.456400   47161 out.go:291] Setting OutFile to fd 1 ...
	I0416 17:24:37.456529   47161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:24:37.456539   47161 out.go:304] Setting ErrFile to fd 2...
	I0416 17:24:37.456545   47161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 17:24:37.456739   47161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18649-3628/.minikube/bin
	I0416 17:24:37.457403   47161 out.go:298] Setting JSON to false
	I0416 17:24:37.458327   47161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4029,"bootTime":1713284248,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 17:24:37.458393   47161 start.go:139] virtualization: kvm guest
	I0416 17:24:37.460633   47161 out.go:177] * [false-726705] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 17:24:37.462093   47161 out.go:177]   - MINIKUBE_LOCATION=18649
	I0416 17:24:37.462096   47161 notify.go:220] Checking for updates...
	I0416 17:24:37.464626   47161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 17:24:37.466039   47161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18649-3628/kubeconfig
	I0416 17:24:37.467328   47161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18649-3628/.minikube
	I0416 17:24:37.468511   47161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 17:24:37.470277   47161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 17:24:37.471924   47161 config.go:182] Loaded profile config "NoKubernetes-496544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0416 17:24:37.472033   47161 config.go:182] Loaded profile config "old-k8s-version-795352": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 17:24:37.472124   47161 config.go:182] Loaded profile config "running-upgrade-512504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0416 17:24:37.472224   47161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 17:24:37.507224   47161 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 17:24:37.508619   47161 start.go:297] selected driver: kvm2
	I0416 17:24:37.508634   47161 start.go:901] validating driver "kvm2" against <nil>
	I0416 17:24:37.508643   47161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 17:24:37.510611   47161 out.go:177] 
	W0416 17:24:37.511847   47161 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0416 17:24:37.513235   47161 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-726705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-726705" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 17:24:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.61.200:8443
  name: NoKubernetes-496544
contexts:
- context:
    cluster: NoKubernetes-496544
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 17:24:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: NoKubernetes-496544
  name: NoKubernetes-496544
current-context: NoKubernetes-496544
kind: Config
preferences: {}
users:
- name: NoKubernetes-496544
  user:
    client-certificate: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/NoKubernetes-496544/client.crt
    client-key: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/NoKubernetes-496544/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-726705

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-726705"

                                                
                                                
----------------------- debugLogs end: false-726705 [took: 3.031793138s] --------------------------------
helpers_test.go:175: Cleaning up "false-726705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-726705
--- PASS: TestNetworkPlugins/group/false (3.29s)

                                                
                                    
TestNoKubernetes/serial/Start (30.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-496544 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-496544 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.759951559s)
--- PASS: TestNoKubernetes/serial/Start (30.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-496544 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-496544 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.779547ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.17s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-496544
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-496544: (1.404897669s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (48.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-496544 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-496544 --driver=kvm2  --container-runtime=crio: (48.341474411s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (48.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-496544 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-496544 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.873762ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (113.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-368813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0416 17:27:03.892333   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 17:27:10.029939   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-368813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (1m53.375336336s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (63.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-512869 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-512869 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m3.956766215s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-368813 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6fd4562b-26b6-4741-b9cd-d8c0939509ba] Pending
helpers_test.go:344: "busybox" [6fd4562b-26b6-4741-b9cd-d8c0939509ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6fd4562b-26b6-4741-b9cd-d8c0939509ba] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004289308s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-368813 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-512869 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [99972ace-e47e-4cf7-aa34-cbf7650fe647] Pending
helpers_test.go:344: "busybox" [99972ace-e47e-4cf7-aa34-cbf7650fe647] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [99972ace-e47e-4cf7-aa34-cbf7650fe647] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005174498s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-512869 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-368813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-368813 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-512869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-512869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.019140837s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-512869 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-795352 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-795352 --alsologtostderr -v=3: (5.293338481s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-795352 -n old-k8s-version-795352: exit status 7 (74.668781ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-795352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (629.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-368813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-368813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (10m28.787403124s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368813 -n no-preload-368813
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (629.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (621.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-512869 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0416 17:32:03.890180   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 17:32:10.030242   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-512869 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (10m21.511550852s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512869 -n embed-certs-512869
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (621.78s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (101.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1658746493 start -p stopped-upgrade-446675 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1658746493 start -p stopped-upgrade-446675 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (55.027548526s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1658746493 -p stopped-upgrade-446675 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1658746493 -p stopped-upgrade-446675 stop: (2.139942362s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-446675 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-446675 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.473585217s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.64s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-446675
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
TestPause/serial/Start (98.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-970622 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-970622 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.322113743s)
--- PASS: TestPause/serial/Start (98.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-304316 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-304316 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m6.781005232s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-304316 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1a9c9801-d038-4949-b606-aedc34d1eeae] Pending
helpers_test.go:344: "busybox" [1a9c9801-d038-4949-b606-aedc34d1eeae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1a9c9801-d038-4949-b606-aedc34d1eeae] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004958442s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-304316 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-304316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-304316 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (639.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-304316 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-304316 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (10m39.220544006s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304316 -n default-k8s-diff-port-304316
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (639.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (61.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-721109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0416 17:53:23.829259   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:23.835377   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:23.845653   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:23.865944   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:23.906345   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:23.986663   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:24.147102   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:24.467506   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:25.108045   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:26.389257   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:28.949785   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:34.070415   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:53:44.311398   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-721109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (1m1.356882488s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-721109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-721109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.130757858s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-721109 --alsologtostderr -v=3
E0416 17:54:04.791931   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-721109 --alsologtostderr -v=3: (7.352907406s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-721109 -n newest-cni-721109
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-721109 -n newest-cni-721109: exit status 7 (79.044599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-721109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (43.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-721109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0416 17:54:45.752421   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-721109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (42.818580129s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-721109 -n newest-cni-721109
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (43.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-721109 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-721109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-721109 -n newest-cni-721109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-721109 -n newest-cni-721109: exit status 2 (266.450043ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-721109 -n newest-cni-721109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-721109 -n newest-cni-721109: exit status 2 (264.834934ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-721109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-721109 -n newest-cni-721109
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-721109 -n newest-cni-721109
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (61.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0416 17:55:06.938728   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m1.224495494s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-726705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-726705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bvvj4" [5a991e19-279d-4bd6-83bd-6ab6b347c770] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bvvj4" [5a991e19-279d-4bd6-83bd-6ab6b347c770] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004100932s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-726705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
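
The two nc probes above exercise different paths from inside the netcat pod: Localhost dials the pod's own listener directly, while HairPin dials the netcat service name, so the connection is expected to hairpin back through the service to the same pod. A minimal Go sketch that runs both probes with the commands shown in the log (the probe helper is illustrative, not code from net_test.go):

	// Illustrative only: run the localhost and hairpin probes from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func probe(target string) error {
		cmd := exec.Command("kubectl", "--context", "auto-726705", "exec",
			"deployment/netcat", "--", "/bin/sh", "-c",
			fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
		return cmd.Run()
	}

	func main() {
		for _, target := range []string{"localhost", "netcat"} {
			if err := probe(target); err != nil {
				fmt.Printf("%s probe failed: %v\n", target, err)
				continue
			}
			fmt.Printf("%s probe succeeded\n", target)
		}
	}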

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (82.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m22.131739191s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (65.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0416 17:57:03.889914   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 17:57:10.030890   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m5.224818007s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dqtxx" [fac7b9b2-afca-49e6-bf32-d99a4c1103b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005160888s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (61.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m1.202227072s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-726705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-726705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ct8wd" [265f525a-c543-43eb-be25-c8fdf7f1ed24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ct8wd" [265f525a-c543-43eb-be25-c8fdf7f1ed24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005429896s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-726705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-726705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-726705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dzz5j" [1440c3fd-dc0e-4fad-9360-860eb0159955] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dzz5j" [1440c3fd-dc0e-4fad-9360-860eb0159955] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005086358s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-726705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (92.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0416 17:58:23.829394   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m32.919774517s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (83.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0416 17:58:47.733958   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:47.739301   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:47.749597   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:47.769876   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:47.810875   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:47.891242   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:48.052224   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:48.372604   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:49.013269   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m23.585970304s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-726705 "pgrep -a kubelet"
E0416 17:58:50.293953   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-726705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z65m6" [c506d1c1-5486-4ead-8bf8-00c88298ef1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0416 17:58:51.514352   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 17:58:52.854584   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 17:58:57.975781   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-z65m6" [c506d1c1-5486-4ead-8bf8-00c88298ef1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.005351133s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (26.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-726705 exec deployment/netcat -- nslookup kubernetes.default
E0416 17:59:08.216039   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-726705 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.250580708s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-726705 exec deployment/netcat -- nslookup kubernetes.default
E0416 17:59:28.696262   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
net_test.go:175: (dbg) Done: kubectl --context bridge-726705 exec deployment/netcat -- nslookup kubernetes.default: (10.202999748s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.04s)
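
The first nslookup above timed out ("no servers could be reached") and the test simply re-ran the same command until it resolved. A small Go sketch of that retry-until-resolved pattern, reusing the exact kubectl invocation from the log; the attempt count and back-off interval are illustrative assumptions, not values taken from the test:

	// Illustrative only: retry the in-cluster DNS lookup until it succeeds.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 5; attempt++ {
			cmd := exec.Command("kubectl", "--context", "bridge-726705", "exec",
				"deployment/netcat", "--", "nslookup", "kubernetes.default")
			out, err := cmd.CombinedOutput()
			if err == nil {
				fmt.Printf("resolved on attempt %d:\n%s", attempt, out)
				return
			}
			fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
			time.Sleep(10 * time.Second) // illustrative back-off between retries
		}
		fmt.Println("kubernetes.default did not resolve within the retry budget")
	}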

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (91.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-726705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.070476499s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bkzqr" [d3f35563-9a63-434f-b6e2-c15aecd262f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006269208s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5vb2l" [4799cba0-132a-44b3-9481-193b7258ced4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006901668s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-726705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-726705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5ss5k" [abdd69ce-ea80-49c8-a2f3-bc40609443f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5ss5k" [abdd69ce-ea80-49c8-a2f3-bc40609443f0] Running
E0416 18:00:09.656768   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004339669s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-726705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-726705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fhvm2" [b7840a06-4633-4b65-aa21-21b4d7eb3eeb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fhvm2" [b7840a06-4633-4b65-aa21-21b4d7eb3eeb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004206412s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-726705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-726705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-726705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-726705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rt9qx" [dbb3ce32-d056-4f2c-bee3-16dc0386342e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rt9qx" [dbb3ce32-d056-4f2c-bee3-16dc0386342e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004783794s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-726705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-726705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
E0416 18:02:03.890198   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/addons-320546/client.crt: no such file or directory
E0416 18:02:10.029882   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/functional-711095/client.crt: no such file or directory
E0416 18:02:18.651736   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/auto-726705/client.crt: no such file or directory
E0416 18:02:44.930140   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:44.935459   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:44.945724   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:44.966059   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:45.006406   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:45.086825   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:45.247230   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:45.567597   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:46.208547   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:47.489599   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:50.050171   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:02:55.170759   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:03:03.936169   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:03.941431   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:03.951678   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:03.971910   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:04.012166   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:04.092504   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:04.252901   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:04.573266   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:05.213596   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:05.411330   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:03:06.494147   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:09.054566   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:14.174919   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:23.828993   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/old-k8s-version-795352/client.crt: no such file or directory
E0416 18:03:24.415668   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:25.891903   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:03:40.572878   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/auto-726705/client.crt: no such file or directory
E0416 18:03:44.896014   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:03:47.733563   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 18:03:50.716950   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:50.722248   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:50.732508   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:50.752744   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:50.792993   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:50.873364   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:51.033910   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:51.354710   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:51.995656   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:53.276390   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:03:55.836583   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:04:00.956856   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:04:06.852241   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:04:11.197830   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:04:15.418080   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/no-preload-368813/client.crt: no such file or directory
E0416 18:04:25.856746   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:04:31.678091   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:04:54.524029   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:54.529345   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:54.539671   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:54.559985   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:54.600275   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:54.680691   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:54.841125   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:55.161926   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:55.317370   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:55.322675   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:55.332909   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:55.353279   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:55.393564   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:55.473923   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:55.634339   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:55.802774   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:55.955100   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:56.595893   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:57.083998   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:04:57.876914   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:04:59.644225   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:05:00.437649   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:05:04.765259   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:05:05.558672   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:05:12.638645   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory
E0416 18:05:15.005666   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:05:15.799352   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:05:28.772465   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/flannel-726705/client.crt: no such file or directory
E0416 18:05:35.486000   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:05:36.279603   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:05:47.777673   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/enable-default-cni-726705/client.crt: no such file or directory
E0416 18:05:56.729227   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/auto-726705/client.crt: no such file or directory
E0416 18:06:16.446170   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/calico-726705/client.crt: no such file or directory
E0416 18:06:17.240423   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/kindnet-726705/client.crt: no such file or directory
E0416 18:06:18.156214   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:18.161480   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:18.171820   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:18.192085   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:18.232366   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:18.312909   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:18.473461   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:18.794162   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:19.435380   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:20.716373   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:23.277512   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:24.413444   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/auto-726705/client.crt: no such file or directory
E0416 18:06:28.397807   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/custom-flannel-726705/client.crt: no such file or directory
E0416 18:06:34.559545   10910 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/bridge-726705/client.crt: no such file or directory

                                                
                                    

Test skip (39/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.2/binaries 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
265 TestStartStop/group/disable-driver-mounts 0.14
272 TestNetworkPlugins/group/kubenet 3.13
281 TestNetworkPlugins/group/cilium 4.31
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-376814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-376814
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-726705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-726705" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 17:24:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.61.200:8443
  name: NoKubernetes-496544
contexts:
- context:
    cluster: NoKubernetes-496544
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 17:24:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: NoKubernetes-496544
  name: NoKubernetes-496544
current-context: NoKubernetes-496544
kind: Config
preferences: {}
users:
- name: NoKubernetes-496544
  user:
    client-certificate: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/NoKubernetes-496544/client.crt
    client-key: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/NoKubernetes-496544/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-726705

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-726705"

                                                
                                                
----------------------- debugLogs end: kubenet-726705 [took: 2.979105359s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-726705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-726705
--- SKIP: TestNetworkPlugins/group/kubenet (3.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-726705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-726705" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18649-3628/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 17:24:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.61.200:8443
  name: NoKubernetes-496544
contexts:
- context:
    cluster: NoKubernetes-496544
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 17:24:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: NoKubernetes-496544
  name: NoKubernetes-496544
current-context: NoKubernetes-496544
kind: Config
preferences: {}
users:
- name: NoKubernetes-496544
  user:
    client-certificate: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/NoKubernetes-496544/client.crt
    client-key: /home/jenkins/minikube-integration/18649-3628/.minikube/profiles/NoKubernetes-496544/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-726705

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: docker system info:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: cri-docker daemon status:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: cri-docker daemon config:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: cri-dockerd version:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: containerd daemon status:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: containerd daemon config:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: containerd config dump:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: crio daemon status:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: crio daemon config:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: /etc/crio:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
>>> host: crio config:
* Profile "cilium-726705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-726705"
----------------------- debugLogs end: cilium-726705 [took: 4.14690844s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-726705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-726705
--- SKIP: TestNetworkPlugins/group/cilium (4.31s)